Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The virtual-real fusion display method disclosed by the embodiments of the present disclosure can be applied to information display in indoor and outdoor scenes such as superstores, transportation hubs (e.g., airports, railway stations, and passenger stations), hospitals, and large exhibition halls. Virtual content is displayed in a live-action image in a virtual-real fusion display mode, so that a user can obtain required information based on the virtual content at the current position, improving the convenience with which the user obtains relevant information. The virtual-real fusion display method can be implemented by a first electronic device and a second electronic device. For example, the first electronic device may include a cloud server, and the second electronic device may include a terminal device.
Fig. 1 shows an interaction diagram of a virtual-real fusion display method according to an embodiment of the present disclosure. As shown in fig. 1, a user may hold or wear the second electronic device 11. When the user needs to obtain information related to the current location, the second electronic device 11 may acquire an environment image of the current location through an acquisition component (e.g., a camera) and determine a current first geographic location through a signal positioning component (e.g., a Global Positioning System (GPS) module, a Real Time Kinematic (RTK) positioning module, a Wireless Fidelity (Wi-Fi) positioning module, or a Bluetooth positioning module), and may then send a visual positioning request to the first electronic device 12 based on the environment image and the first geographic location of the second electronic device 11.
The first electronic device 12 stores a first point cloud map corresponding to a target area (e.g., a mall interior area, an airport interior area, a city area, etc.) where the first geographic location is located, and a virtual scene map corresponding to the first point cloud map. After receiving the visual positioning request sent by the second electronic device 11, the first electronic device 12 may perform visual positioning according to the environment image, the first geographic location, and the first point cloud map, to obtain a visual positioning result including the second geographic location and posture information of the second electronic device 11.
In addition, the first electronic device 12 may also determine the virtual content to be displayed in the virtual scene map corresponding to the first point cloud map according to the second geographic location, and then return the visual positioning result and the virtual content to be displayed to the second electronic device 11.
The second electronic device 11 realizes virtual-real fusion display of the live-action image and the virtual content according to the second geographic location and the posture information included in the received visual positioning result, so that the user can acquire the required information based on the virtual content at the current position, improving the convenience with which the user acquires related information.
The virtual-real fusion display method according to the embodiment of the disclosure is explained in detail below.
Fig. 2 shows a flowchart of a virtual-real fusion display method according to an embodiment of the present disclosure. The method is applied to a first electronic device, and as shown in fig. 2, the method includes:
in step S21, a visual positioning request sent by the second electronic device is received, where the visual positioning request includes an environment image where the second electronic device is located and a first geographic location of the second electronic device.
In step S22, a first point cloud map corresponding to the first geographic location is searched, and a visual positioning result of the second electronic device is determined according to the first point cloud map and the environment image, where the visual positioning result includes second geographic location and posture information of the second electronic device.
In step S23, a virtual scene map corresponding to the first point cloud map is searched, and virtual content to be displayed is determined according to the virtual scene map and the second geographic location.
In step S24, the second geographic position, the posture information and the virtual content are sent to the second electronic device, so that the second electronic device displays the virtual content in the live-action image of the display interface according to the second geographic position and the posture information.
In one possible implementation manner, the first electronic device may include a cloud server, and the second electronic device may include a terminal device. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor invoking computer-readable instructions stored in a memory.
After receiving the visual positioning request sent by the second electronic device, the first electronic device finds a first point cloud map for performing visual positioning on the second electronic device according to a first geographic position in the visual positioning request, and then obtains a visual positioning result of the second electronic device according to the first point cloud map and an environment image in the visual positioning request.
In order to implement visual positioning of the second electronic device at the first geographic location, the first electronic device needs to pre-construct a first point cloud map of a target area where the first geographic location is located.
In one possible implementation, the method further includes: carrying out three-dimensional spatial reconstruction on a target area to obtain a first point cloud map and a first geometric grid map corresponding to the target area, wherein the target area is a spatial area comprising a first geographic position; constructing a virtual scene map corresponding to the target area according to the first geometric grid map, wherein a first mapping relation exists between the virtual scene map and the first point cloud map; and determining a second mapping relation between the first point cloud map and the geodetic coordinate system.
In a possible implementation manner, reconstructing a three-dimensional space of a target area to obtain a first point cloud map and a first geometric grid map corresponding to the target area includes: constructing a local coordinate system according to the two-dimensional map corresponding to the target area; carrying out three-dimensional space reconstruction on the target area to obtain a second point cloud map and a second geometric grid map corresponding to the target area; and based on the local coordinate system, carrying out data alignment on the second point cloud map, the second geometric grid map and the two-dimensional map to obtain a first point cloud map and a first geometric grid map.
For example, the target area is subjected to three-dimensional space reconstruction to obtain a second point cloud map and a second geometric grid map corresponding to the target area, and a two-dimensional map corresponding to the target area is acquired. The two-dimensional map may be a surveying and mapping map, a CAD drawing, or the like, and the target area may be an interior area of a mall, an interior area of an airport, an urban area, or the like, which is not specifically limited by the present disclosure.
A certain point in the two-dimensional map of the target area is selected as an origin to construct a local coordinate system, and the second point cloud map, the second geometric grid map, and the two-dimensional map are aligned according to the local coordinate system to obtain the first point cloud map and the first geometric grid map.
In a possible implementation manner, based on the local coordinate system, performing data alignment on the second point cloud map, the second geometric grid map, and the two-dimensional map to obtain the first point cloud map and the first geometric grid map, including: determining a first spatial transformation matrix between the second point cloud map and the two-dimensional map and a second spatial transformation matrix between the second geometric grid map and the two-dimensional map based on the local coordinate system; performing space transformation operation on the second point cloud map according to the first space transformation matrix to obtain a first point cloud map; and carrying out space transformation operation on the second geometric grid map according to the second space transformation matrix to obtain the first geometric grid map.
In one possible implementation, determining a first spatial transformation matrix between the second point cloud map and the two-dimensional map and determining a second spatial transformation matrix between the second geometric grid map and the two-dimensional map based on the local coordinate system includes: determining third coordinate information of at least three second position points in the target area in the second point cloud map, and determining fourth coordinate information of the at least three second position points in the second geometric grid map; determining fifth coordinate information of the at least three second position points in the local coordinate system; determining the first spatial transformation matrix according to the third coordinate information of the at least three second position points in the second point cloud map and the fifth coordinate information of the at least three second position points in the local coordinate system; and determining the second spatial transformation matrix according to the fourth coordinate information of the at least three second position points in the second geometric grid map and the fifth coordinate information of the at least three second position points in the local coordinate system.
For example, at least three second position points in the space region are selected, third coordinate information of the at least three second position points in the second point cloud map, fourth coordinate information in the second geometric grid map, and fifth coordinate information in a local coordinate system constructed based on the two-dimensional map are respectively determined, the first space transformation matrix M1 is further determined according to the third coordinate information and the fifth coordinate information, and the second space transformation matrix M2 is determined according to the fourth coordinate information and the fifth coordinate information. The specific number of the second location points may be determined according to actual conditions, and is not specifically limited in this disclosure.
According to the first spatial transformation matrix M1, the second point cloud map data can be aligned to the two-dimensional map to obtain the first point cloud map; and according to the second spatial transformation matrix M2, the second geometric grid map data can be aligned to the two-dimensional map to obtain the first geometric grid map.
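As a non-limiting illustration (the disclosure does not prescribe a particular alignment algorithm), the following Python sketch shows one common way a spatial transformation matrix such as M1 or M2 could be estimated from at least three corresponding position points using the Kabsch method, and then applied to a whole map; all function and variable names are illustrative only, and a similarity variant with scale (Umeyama) could be substituted if the reconstructions are not metrically scaled.

```python
import numpy as np

def estimate_rigid_transform(src_pts, dst_pts):
    """Estimate a 4x4 rigid transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 3) arrays of at least three corresponding,
    non-collinear points (e.g., the second position points expressed in
    the second point cloud map and in the local coordinate system).
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

def apply_transform(M, pts):
    """Apply a 4x4 transform to an (N, 3) point cloud."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (pts_h @ M.T)[:, :3]

# Illustrative usage: M1 maps the second point cloud map into the local
# coordinate system; applying it to the whole cloud yields the first map.
# M1 = estimate_rigid_transform(third_coords, fifth_coords)
# first_point_cloud = apply_transform(M1, second_point_cloud)
```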
In one possible implementation, constructing a virtual scene map corresponding to the target area according to the first geometric grid map includes: determining a virtual content editing space under the local coordinate system according to the first geometric grid map; and performing a virtual content editing operation in the virtual content editing space to obtain the virtual scene map, where the virtual scene map includes the virtual content to be displayed in the live-action image of the display interface when the second electronic device is at the second geographic location.
Because the first geometric grid map has been data-aligned with the two-dimensional map of the target area, a virtual content editing space under the local coordinate system corresponding to the two-dimensional map can be determined according to the first geometric grid map, and a user can perform virtual content editing operations in the virtual content editing space on the first electronic device to obtain the virtual scene map. The virtual scene map includes the virtual content to be displayed corresponding to a plurality of geographic locations in the target area.
In one possible implementation, the virtual content in the virtual scene map may include public information presentation facilities in the target area. For example, according to a local coordinate system corresponding to the two-dimensional map, the public information display facilities are edited at a plurality of geographic positions in the virtual scene map, so that when the user is located at the plurality of geographic positions in the target area, the public information display facilities serving as the virtual content can be obtained by the virtual-real fusion display method without going to a fixed geographic position where the public information display facilities are actually arranged in the target area.
In one possible implementation, the virtual content in the virtual scene map may include at least one of a building, a business, a service facility, and a billboard. For example, according to a local coordinate system corresponding to the two-dimensional map, information such as buildings, businesses, service facilities, billboards and the like is edited at a position corresponding to a target area in the virtual scene map, so that when a user is located at a certain geographic position in the target area, relevant information of the current geographic position can be acquired through the virtual-real fusion display method, and convenience in information acquisition is improved.
In one possible implementation, the virtual content in the virtual scene map is dynamically updated according to a preset period. For example, for a merchant A in the target area, recommended goods are preset for each day of the week; therefore, in the virtual scene map corresponding to the target area, at the position corresponding to merchant A, the virtual content for each day of the week is dynamically updated according to the recommended content of merchant A, so that the user can obtain effective goods recommendation information from the virtual content in the virtual-real fusion display.
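As a purely hypothetical sketch (the disclosure does not specify a data structure), virtual scene map entries could carry per-weekday content variants that are selected according to the preset period:

```python
import datetime

# Hypothetical structure: content entries keyed by local-coordinate anchor
# positions, with per-weekday variants for merchant A's recommended goods.
virtual_scene_map = {
    (12.5, 3.0, 1.6): {                       # anchor of merchant A's board
        "type": "billboard",
        "weekly_recommendations": {
            0: "Monday: recommended coffee beans",
            1: "Tuesday: recommended cold brew",
            # ... one entry for each remaining day of the preset period
        },
    },
}

def content_for_today(entry):
    """Pick the variant matching the current day of the preset weekly period."""
    weekday = datetime.date.today().weekday()  # 0 = Monday ... 6 = Sunday
    weekly = entry.get("weekly_recommendations", {})
    return weekly.get(weekday, entry.get("type"))
```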
Since the first geometric grid map and the first point cloud map are both obtained by three-dimensional reconstruction of the target area and are both subjected to the data alignment operation with the two-dimensional map of the target area, a mapping relationship exists between the first geometric grid map and the first point cloud map. Because the virtual scene map is constructed based on the first geometric grid map, a first mapping relationship exists between the virtual scene map and the first point cloud map.
In a possible implementation manner, in order to implement virtual-real fusion display in a large-range target area, for example, virtual-real fusion display in a global area, a second mapping relationship needs to be constructed between a first point cloud map corresponding to the target area and a geodetic coordinate system.
In one possible implementation, determining a second mapping relationship between the first point cloud map and the geodetic coordinate system includes: determining a third spatial transformation matrix between the local coordinate system and the geodetic coordinate system; and determining a second mapping relation according to the third spatial transformation matrix.
In one possible implementation, determining a third spatial transformation matrix between the local coordinate system and the geodetic coordinate system includes: determining first coordinate information of at least three first position points in a target area in a geodetic coordinate system; determining second coordinate information of at least three first position points in a local coordinate system; and determining a third spatial transformation matrix according to the first coordinate information of the at least three first position points in the geodetic coordinate system and the second coordinate information of the at least three first position points in the local coordinate system.
For example, at least three first position points in the spatial region are selected, first coordinate information of the at least three first position points in a geodetic coordinate system and second coordinate information of the at least three first position points in a local coordinate system constructed according to the two-dimensional map are respectively determined, and then the third spatial transformation matrix M3 is determined according to the first coordinate information and the second coordinate information. According to the third spatial transformation matrix M3, the mapping relationship between the geodetic coordinate system and the local coordinate system can be determined, and since the first point cloud map is obtained after data alignment with the two-dimensional map in the local coordinate system, according to the third spatial transformation matrix M3, the second mapping relationship between the first point cloud map and the geodetic coordinate system can be determined.
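The sketch below illustrates how the second mapping relation could be applied, assuming a latitude/longitude fix is first expressed in a Cartesian earth frame (e.g., ECEF or a local ENU frame), and assuming the 4x4 matrix M3 was estimated from the first and second coordinate information of the at least three first position points (e.g., with the estimate_rigid_transform sketch above); this is an illustration, not the prescribed implementation.

```python
import numpy as np

def geodetic_to_local(geo_xyz, M3):
    """Map a Cartesian geodetic-frame point into the local coordinate system.

    geo_xyz: a position already converted from latitude/longitude into a
    Cartesian earth frame; M3: the third spatial transformation matrix
    between the geodetic and local coordinate systems.
    """
    p = np.append(np.asarray(geo_xyz, dtype=np.float64), 1.0)
    return (M3 @ p)[:3]

# Usage sketch: a GPS/RTK fix (the first geographic location), expressed
# in the Cartesian earth frame, is carried into the first point cloud
# map's local frame through the second mapping relation:
# local_pos = geodetic_to_local(earth_frame_fix, M3)
```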
In a possible implementation manner, the first electronic device stores the first point cloud map, which corresponds to the target area and has the second mapping relationship with the geodetic coordinate system, and the virtual scene map, which has the first mapping relationship with the first point cloud map, so that a second electronic device located in the target area can subsequently be visually positioned and virtual-real fusion display can be implemented on the second electronic device.
In one possible implementation, the first geographic location is a geographic location in a geodetic coordinate system; searching for a first point cloud map corresponding to a first geographic location, comprising: and searching a first point cloud map corresponding to the target space region where the first geographic position is located according to the second mapping relation.
For example, after the first electronic device receives the visual positioning request sent by the second electronic device, because the first geographic location included in the visual positioning request is a geographic location in the geodetic coordinate system, and a second mapping relationship exists between the first point cloud map corresponding to the target area where the first geographic location is located and the geodetic coordinate system, the first point cloud map used for visually positioning the second electronic device can be searched according to the second mapping relationship.
For example, assume the first geographic location of the second electronic device is a certain location in an airport, and a second mapping relationship exists between the first point cloud map corresponding to the airport and the geodetic coordinate system. Because the first geographic location included in the visual positioning request is a geographic location in the geodetic coordinate system, after receiving the visual positioning request, the first electronic device can find, according to the second mapping relationship, the first point cloud map used for visually positioning the second electronic device.
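The disclosure does not specify how this map lookup is indexed; one simple illustrative possibility is a registry of geodetic bounding boxes recorded when each target area's second mapping relationship is established (all names below are hypothetical):

```python
def find_first_point_cloud_map(first_geo_location, map_registry):
    """Return the point cloud map whose target area contains the coarse fix.

    map_registry: hypothetical list of (geodetic bounding box, map) pairs
    recorded when each target area's second mapping relationship was built.
    """
    lat, lon = first_geo_location
    for (lat_min, lat_max, lon_min, lon_max), point_cloud_map in map_registry:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return point_cloud_map
    return None  # no pre-built first point cloud map covers this fix
```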
In one possible implementation manner, determining a visual positioning result of the second electronic device according to the first point cloud map and the environment image includes: extracting characteristic information of the environment image; and performing visual positioning on the second electronic equipment according to the feature information of the environment image and the first point cloud map, and determining a visual positioning result of the second electronic equipment.
For example, the first electronic device, after receiving the visual positioning request, may extract feature information of an environment image included in the visual positioning request. For example, feature extraction may be performed on the environment image through a pre-trained neural network, so as to obtain feature information of the environment image. The present disclosure does not limit the specific manner of feature extraction.
After obtaining the feature information of the environment image, the first electronic device may perform feature matching between the feature information and the first point cloud map to obtain the visual positioning result. The visual positioning result includes the second geographic position and posture information of the second electronic device; the second geographic position may include position coordinates of the second electronic device, and the posture information may include an orientation, a pitch angle, and the like of the second electronic device. The present disclosure does not limit the specific manner of feature matching.
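As one concrete, non-limiting realization of this step, the sketch below uses ORB features and RANSAC-based PnP from OpenCV; the stored map descriptors, their 3D coordinates, and the camera intrinsic matrix K are assumed to be available with the first point cloud map, and the disclosure itself does not fix any of these choices.

```python
import cv2
import numpy as np

def visually_position(environment_image, map_descriptors, map_points_3d, K):
    """Estimate the device pose in the first point cloud map's frame.

    map_descriptors / map_points_3d: feature descriptors and their 3D
    coordinates assumed to be stored with the first point cloud map;
    K: 3x3 camera intrinsic matrix of the acquisition component.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(environment_image, None)
    if descriptors is None:
        return None  # no usable features in the environment image

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 4:
        return None  # PnP needs at least four 2D-3D correspondences

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # RANSAC PnP recovers the camera pose relative to the map frame.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(object_pts, image_pts, K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    position = (-R.T @ tvec).ravel()  # second geographic position (map frame)
    return position, rvec             # rvec encodes the posture information
```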
In one possible implementation manner, searching for a virtual scene map corresponding to the first point cloud map includes: and searching a virtual scene map corresponding to the first point cloud map according to the first mapping relation.
For example, after the first electronic device finds the first point cloud map used for performing the visual positioning on the second electronic device, the virtual scene map corresponding to the first point cloud map may be further found according to the first mapping relationship, so that after the second electronic device is visually positioned based on the first point cloud map to obtain the second geographic location, the virtual content to be displayed corresponding to the second geographic location may be found in the virtual scene map.
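A minimal illustrative sketch of selecting the virtual content to be displayed, assuming the virtual scene map stores content entries keyed by local-coordinate anchor positions (as in the earlier hypothetical structure):

```python
import numpy as np

def content_near(second_geo_location, virtual_scene_map, radius=30.0):
    """Select the virtual content to be displayed near the positioned device.

    virtual_scene_map: hypothetical mapping from local-coordinate anchor
    positions to content entries; radius is the visibility range in the
    local coordinate system's units.
    """
    pos = np.asarray(second_geo_location, dtype=np.float64)
    return [entry for anchor, entry in virtual_scene_map.items()
            if np.linalg.norm(np.asarray(anchor) - pos) <= radius]
```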
In a possible implementation manner, the first electronic device returns the visual positioning result and the virtual content to be displayed to the second electronic device, so that the second electronic device displays the virtual content in the live-action image of its display interface according to the second geographic location and the posture information in the visual positioning result. In this manner, virtual-real fusion display of the live-action image and the virtual content on the second electronic device can be realized, so that a user holding or wearing the second electronic device can acquire required information based on the virtual content at the current position, improving the convenience with which the user acquires related information.
According to the virtual-real fusion display method of the embodiments of the present disclosure, virtual-real fusion display can be realized, improving the convenience with which a user acquires related information based on virtual content. The method can be applied to information display in indoor and outdoor scenes such as superstores, transportation hubs (e.g., airports, railway stations, and passenger stations), hospitals, and large exhibition halls: the virtual content is displayed in the live-action image in a virtual-real fusion display mode, so that the user can obtain the required information based on the virtual content at the current position.
Fig. 3 shows a flowchart of a virtual-real fusion display method according to an embodiment of the present disclosure. The method is applied to a second electronic device, and as shown in fig. 3, the method includes:
in step S31, a visual positioning request is sent to the first electronic device, where the visual positioning request includes an image of an environment where the second electronic device is located and a first geographic location of the second electronic device.
In step S32, a visual positioning result returned by the first electronic device and the virtual content to be presented are received, where the visual positioning result is determined by the first electronic device according to the environment image and the first geographic location, and the visual positioning result includes the second geographic location and posture information of the second electronic device.
In step S33, virtual content is presented in the live-action image of the display interface of the second electronic device according to the second geographic position and the posture information.
In one possible implementation, the first electronic device may include a cloud server, the second electronic device may include a terminal device, and the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like.
In a possible implementation manner, when a user holding or wearing the second electronic device needs to obtain relevant information at the current location, an environment image of the current location may be collected by an acquisition component (e.g., a camera) in the second electronic device, for example, by capturing an image of the scene the second electronic device faces. The environment image may be one or more images, or may be a short video including a plurality of frames of images, which is not limited in this disclosure.
Since the scene images corresponding to different geographic locations may be similar — for example, shop B located in mall A and shop D located in mall C belong to the same brand, and shops of the same brand have similar decoration styles, so the environment image captured by the second electronic device at shop B is similar to the environment image captured at shop D — in order to improve the positioning accuracy of the subsequent visual positioning based on the environment image, a user holding or wearing the second electronic device can first perform rough positioning through a signal positioning component in the second electronic device to obtain the current first geographic location.
In one possible implementation, when the user holding or wearing the second electronic device is located outdoors, the current first geographic location can be obtained through rough positioning by a GPS positioning module and/or an RTK positioning module in the second electronic device; when the user is located indoors, the current first geographic location can be roughly determined through a Wi-Fi positioning module and/or a Bluetooth positioning module in the second electronic device.
In a possible implementation manner, in step S31, the second electronic device sends a visual positioning request to the first electronic device, where the visual positioning request includes an environment image acquired by the second electronic device and a first geographic location obtained by rough positioning.
Because the first electronic device stores the first point cloud map corresponding to the target area (e.g., an airport interior area, a mall interior area, a city area, etc.) where the first geographic location is located, the second electronic device sends the visual positioning request including the first geographic location to the first electronic device, so that the first electronic device can accurately find the first point cloud map for performing visual positioning according to the first geographic location in the visual positioning request.
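For illustration only (the disclosure does not define a wire format), the request and response exchanged in steps S31 and S32 might be structured as follows; every field name is an assumption:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical wire format; field names are not part of the disclosure.
@dataclass
class VisualPositioningRequest:
    environment_image: bytes                 # image(s) captured by the camera
    first_geo_location: Tuple[float, float]  # coarse fix from GPS/RTK/Wi-Fi/Bluetooth

@dataclass
class VisualPositioningResponse:
    second_geo_location: Tuple[float, float, float]  # refined visual-positioning fix
    posture: Tuple[float, float, float]              # orientation, pitch angle, etc.
    virtual_content: List[dict]                      # content to overlay on the image
```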
In a possible implementation manner, in step S32, the second electronic device receives a visual positioning result returned by the first electronic device and the virtual content to be presented, where the visual positioning result is obtained by the first electronic device through visual positioning based on the first point cloud map and the environment image.
In one possible implementation, the visual positioning result includes the second geographic position and posture information of the second electronic device. The second geographic position may include position coordinates of the second electronic device, and the posture information may include an orientation, a pitch angle, and the like of the second electronic device. Compared with the first geographic position roughly obtained by signal positioning, the second geographic position obtained by visual positioning can more accurately reflect the position of the second electronic device.
In a possible implementation manner, since the virtual scene map corresponding to the first point cloud map is stored in the first electronic device, after the first electronic device performs visual positioning to obtain the second geographic location, it can determine, according to the second geographic location and the virtual scene map, the virtual content to be displayed by the second electronic device at the current second geographic location.
In a possible implementation manner, in step S33, the virtual content is displayed in the live-action image of the display interface of the second electronic device according to the second geographic location and the posture information, realizing virtual-real fusion display of the live-action image and the virtual content, so that the user can obtain the required information based on the virtual content at the current position, improving the convenience with which the user obtains relevant information.
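As an illustrative sketch of this rendering step, the posture and position from the visual positioning result can be used to project the virtual content's 3D anchor points into the live-action image; a full AR renderer would of course draw complete 3D content rather than markers, and all names here are assumptions.

```python
import cv2
import numpy as np

def overlay_virtual_content(frame, content_points_3d, rvec, tvec, K):
    """Mark virtual content anchors in the live-action image.

    content_points_3d: 3D anchor points of the virtual content in the map
    frame; rvec/tvec: camera pose derived from the posture and second
    geographic position in the visual positioning result; K: intrinsics.
    """
    pts_2d, _ = cv2.projectPoints(
        np.asarray(content_points_3d, dtype=np.float64), rvec, tvec, K, None)
    h, w = frame.shape[:2]
    for (u, v) in pts_2d.reshape(-1, 2):
        if 0 <= u < w and 0 <= v < h:  # draw only anchors within the view
            cv2.circle(frame, (int(u), int(v)), 6, (0, 255, 0), -1)
    return frame
```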
In one possible implementation, the virtual content is a public information presentation facility of a target area in which the second geographic location is located.
In one possible implementation, the method further includes: in a case where the virtual content is triggered, displaying, in the live-action image of the display interface, the display content corresponding to the public information display facility.
For example, the user holding or wearing the second electronic device is currently located at the second geographic location, the target area where the second geographic location is located is a venue provided with a meeting place, and the virtual content is a meeting place information display board of the venue, so that the user does not need to go to a fixed position where the meeting place is located in the venue, and the meeting place information can be acquired according to the virtual content at the current second geographic location.
Fig. 4 shows a schematic diagram of a display interface according to an embodiment of the present disclosure. As shown in fig. 4, the live-action image of the display interface of the second electronic device is an environment image of the second geographic location (a certain geographic location in the venue) where the user is currently located, and the virtual content shown in the live-action image is the meeting place information display board, which is a virtual sign corresponding to the actual meeting place information display board.
When the meeting place information display board in the display interface shown in fig. 4 is triggered (for example, the user clicks the meeting place information display board in fig. 4), the detailed meeting place information corresponding to the meeting place information display board is further displayed in the live-action image of the display interface, as shown in fig. 5. FIG. 5 illustrates a schematic diagram of a display interface according to an embodiment of the disclosure.
For example, the user holding or wearing the second electronic device is currently located at the second geographic location, the target area where the second geographic location is located is an airport, and the virtual content is a flight information display board of the airport, so that the user does not need to go to a fixed position where the flight information display board is located in the airport, and the flight information can be obtained at the current second geographic location according to the virtual content.
For example, the live-action image of the display interface of the second electronic device is an environment image of the second geographic location (a certain geographic location in an airport) where the user is currently located, and the virtual content displayed in the live-action image is a flight information display board, which is a virtual sign corresponding to the actual flight information display board.
When the flight information display board in the display interface is triggered (for example, the user clicks the flight information display board in the display interface), the detailed flight information corresponding to the flight information display board is further displayed in the live-action image of the display interface. For example, the user can determine which gate needs to be visited according to the detailed flight information in the display interface at the current second geographic location, and does not need to visit the fixed geographic location where the flight information display board is located in the airport, so that convenience in information acquisition and information acquisition efficiency can be improved.
In one possible implementation, the method further includes: sending a positioning request for a public information presentation facility to the first electronic device in case the virtual content is triggered; under the condition that a positioning result sent by the first electronic equipment is received, determining a third geographic position of the public information display facility; and determining a navigation path from the second geographical position to the third geographical position of the second electronic equipment according to the second geographical position and the third geographical position.
Still taking the above-mentioned fig. 4 as an example, when the meeting place information display board in the display interface shown in fig. 4 is triggered (for example, the user clicks the meeting place information display board in fig. 4), the second electronic device sends a positioning request to the first electronic device to request the third geographic location of the meeting place.
When the second electronic device receives the third geographic location returned by the first electronic device, it determines a navigation path according to the third geographic location of the meeting place and the current second geographic location, and then guides the user from the current second geographic location to the third geographic location where the meeting place is located according to the navigation path.
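The disclosure does not fix a path-planning algorithm. As a minimal placeholder, the sketch below returns evenly spaced waypoints on the straight segment between the two positions; a deployed system would instead route on a walkable-space graph or navigation mesh derived from the geometric grid map.

```python
import numpy as np

def straight_line_path(second_geo_location, third_geo_location, step=1.0):
    """Placeholder navigation path between the two positions.

    Returns evenly spaced waypoints on the segment from the device's
    second geographic location to the facility's third geographic
    location, with spacing of roughly `step` in map units.
    """
    a = np.asarray(second_geo_location, dtype=np.float64)
    b = np.asarray(third_geo_location, dtype=np.float64)
    n = max(2, int(np.linalg.norm(b - a) / step) + 1)
    return [tuple(a + (b - a) * t) for t in np.linspace(0.0, 1.0, n)]
```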
In one possible implementation, the method further includes: displaying an Augmented Reality (AR) navigation path in the live-action image of the display interface according to the navigation path.
FIG. 6 illustrates a schematic diagram of a display interface according to an embodiment of the present disclosure. As shown in fig. 6, the AR navigation path is displayed in the live-action image of the display interface of the second electronic device according to the navigation path, so as to instruct the user to go from the current second geographical location to the third geographical location where the meeting place is located according to the AR navigation path, thereby improving the intuitiveness of the navigation route. For example, the AR navigation path includes AR arrows, AR navigation avatars, and the like along the navigation path.
In one possible implementation, the virtual content further includes: at least one of a building, a merchant, a service facility, a billboard.
For example, the user holding or wearing the second electronic device is currently located at the second geographic location, and the virtual content is content to be displayed that is related to the current second geographic location, for example, the buildings, merchants, service facilities, billboards, and the like included in the environment image of the second geographic location.
FIG. 7 shows a schematic diagram of a display page according to an embodiment of the present disclosure. As shown in fig. 7, if the second geographic location where the second electronic device is currently located is a coffee shop, the virtual content is a virtual logo of the coffee shop, and the virtual logo is shown in the live-action image of the display page.
In one possible implementation, the method further includes: displaying recommendation information corresponding to the virtual content under the condition that the virtual content is triggered, wherein the recommendation information comprises: at least one of building information, merchant information, service guide, and marketing content.
For example, when virtual content (e.g., a virtual logo) in the display interface is triggered, more information corresponding to the virtual content is displayed, such as the building information of a building at the current second geographic location, or the merchant information or recommendation information corresponding to a merchant at the current second geographic location.
FIG. 8 shows a schematic diagram of a display page according to an embodiment of the present disclosure. As shown in fig. 8, the virtual content displayed in the live-action image of the display interface is the commodity recommendation information of the merchant corresponding to the current second geographic location.
In the embodiments of the present disclosure, the second electronic device sends the visual positioning request to the first electronic device according to the environment image and the first geographic location, receives the visual positioning result returned by the first electronic device and the virtual content to be displayed, and then displays the virtual content in the live-action image of the display interface of the second electronic device according to the second geographic location and the posture information of the second electronic device included in the visual positioning result, realizing virtual-real fusion display of the live-action image and the virtual content, so that the user can obtain the required information based on the virtual content at the current position, improving the convenience with which the user obtains relevant information.
In the above embodiments of the method, the point cloud map refers to a high-precision map for performing visual positioning, and the geometric grid map refers to a high-precision map for adding virtual content to construct a virtual scene map.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from principle and logic; due to space limitations, details are not described again in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a virtual-real fusion display apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any virtual-real fusion display method provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding descriptions of the method portions, which are not repeated here.
Fig. 9 shows a block diagram of a virtual-real fusion display apparatus according to an embodiment of the present disclosure. The virtual-real fusion display apparatus is applied to a first electronic device. As shown in fig. 9, the apparatus 90 includes:
the receivingmodule 91 is configured to receive a visual positioning request sent by a second electronic device, where the visual positioning request includes an environment image where the second electronic device is located and a first geographic location of the second electronic device;
the first determiningmodule 92 is configured to search for a first point cloud map corresponding to the first geographic location, and determine a visual positioning result of the second electronic device according to the first point cloud map and the environment image, where the visual positioning result includes a second geographic location and posture information of the second electronic device;
the second determiningmodule 93 is configured to search for a virtual scene map corresponding to the first point cloud map, and determine a virtual content to be displayed according to the virtual scene map and the second geographic location;
the sendingmodule 94 is configured to send the second geographic position, the posture information, and the virtual content to the second electronic device, so that the second electronic device displays the virtual content in the live-action image of the display interface according to the second geographic position and the posture information.
In one possible implementation, the apparatus 90 further includes:
the first map building module is used for carrying out three-dimensional space reconstruction on a target area to obtain a first point cloud map and a first geometric grid map which correspond to the target area, wherein the target area is a space area comprising a first geographic position;
the second map building module is used for building a virtual scene map corresponding to the target area according to the first geometric grid map, wherein a first mapping relation exists between the virtual scene map and the first point cloud map;
and the third determination module is used for determining a second mapping relation between the first point cloud map and the geodetic coordinate system.
In one possible implementation, the first geographic location is a geographic location in a geodetic coordinate system;
the first determiningmodule 92 is specifically configured to:
and searching a first point cloud map corresponding to the target space region where the first geographic position is located according to the second mapping relation.
In a possible implementation manner, the second determining module 93 is specifically configured to:
and searching a virtual scene map corresponding to the first point cloud map according to the first mapping relation.
In one possible implementation, the first determining module 92 includes:
the characteristic extraction submodule is used for extracting the characteristic information of the environment image;
and the visual positioning sub-module is used for carrying out visual positioning on the second electronic equipment according to the characteristic information of the environment image and the first point cloud map and determining a visual positioning result of the second electronic equipment.
In one possible implementation, the first map building module includes:
the local coordinate system construction submodule is used for constructing a local coordinate system according to the two-dimensional map corresponding to the target area;
the first map construction submodule is used for carrying out three-dimensional space reconstruction on the target area to obtain a second point cloud map and a second geometric grid map which correspond to the target area;
and the data alignment submodule is used for performing data alignment on the second point cloud map, the second geometric grid map and the two-dimensional map based on the local coordinate system to obtain a first point cloud map and a first geometric grid map.
In one possible implementation, the data alignment sub-module includes:
the first determining unit is used for determining a first space transformation matrix between the second point cloud map and the two-dimensional map and determining a second space transformation matrix between the second geometric grid map and the two-dimensional map based on the local coordinate system;
the first space transformation unit is used for carrying out space transformation operation on the second point cloud map according to the first space transformation matrix to obtain a first point cloud map;
and the second space transformation unit is used for carrying out space transformation operation on the second geometric grid map according to the second space transformation matrix to obtain the first geometric grid map.
In one possible implementation, the second map building module includes:
the editing space determining submodule is used for determining a virtual content editing space under local coordinates according to the first geometric grid map;
and the content editing submodule is used for performing virtual content editing operation in the virtual content editing space to obtain a virtual scene map, wherein the virtual scene map comprises virtual content to be displayed in the live-action image of the display interface when the second electronic equipment is at the second geographic position.
In one possible implementation manner, the third determining module includes:
the first determining submodule is used for determining a third spatial transformation matrix between the local coordinate system and the geodetic coordinate system;
and the second determining submodule is used for determining a second mapping relation according to the third spatial transformation matrix.
In one possible implementation, the first determining sub-module includes:
the second determining unit is used for determining first coordinate information of at least three first position points in the target area under a geodetic coordinate system;
the third determining unit is used for determining second coordinate information of at least three first position points in the local coordinate system;
and the fourth determining unit is used for determining a third spatial transformation matrix according to the first coordinate information of the at least three first position points in the geodetic coordinate system and the second coordinate information of the at least three first position points in the local coordinate system.
In a possible implementation manner, the first determining unit is specifically configured to:
determining third coordinate information of at least three second position points in the target area in the second point cloud map, and determining fourth coordinate information of at least three second position points in the second geometric grid map;
determining fifth coordinate information of at least three second position points in a local coordinate system;
determining a first spatial transformation matrix according to third coordinate information of at least three second position points in a second point cloud map and fifth coordinate information of at least three second position points in a local coordinate system;
and determining a second space transformation matrix according to the fourth coordinate information of the at least three second position points in the second geometric grid map and the fifth coordinate information of the at least three second position points in the local coordinate system.
Fig. 10 shows a block diagram of a virtual-real fusion display apparatus according to an embodiment of the present disclosure. The virtual-real fusion display apparatus is applied to a second electronic device. As shown in fig. 10, the apparatus 100 includes:
the sendingmodule 101 is configured to send a visual positioning request to a first electronic device, where the visual positioning request includes an environment image where a second electronic device is located and a first geographic location of the second electronic device;
the receivingmodule 102 is configured to receive a visual positioning result and virtual content to be displayed, where the visual positioning result is determined by the first electronic device according to the environment image and the first geographic position, and the visual positioning result includes second geographic position and posture information of the second electronic device;
and thedisplay module 103 is configured to display the virtual content in the live-action image of the display interface of the second electronic device according to the second geographic position and the posture information.
In one possible implementation, the virtual content is a public information display facility of a target area where the second geographic location is located;
the display module 103 is further configured to display, in the live-action image of the display interface, the display content corresponding to the public information display facility when the virtual content is triggered.
In a possible implementation manner, the sending module 101 is further configured to send a positioning request for the public information presentation facility to the first electronic device in a case where the virtual content is triggered;
the apparatus 100 further comprises:
the first determining module is used for determining a third geographic position of the public information display facility under the condition of receiving the positioning result sent by the first electronic equipment;
and the second determining module is used for determining a navigation path from the second geographical position to the third geographical position of the second electronic equipment according to the second geographical position and the third geographical position.
In a possible implementation manner, the display module 103 is further configured to display the augmented reality (AR) navigation path in the live-action image of the display interface according to the navigation path.
In a possible implementation manner, the display module 103 is further configured to display recommendation information corresponding to the virtual content when the virtual content is triggered, where the recommendation information includes: at least one of building information, merchant information, service guide, and marketing content.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable codes, and when the computer readable codes are run on a device, a processor in the device executes instructions for implementing the virtual-real fusion display method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, where the instructions, when executed, cause a computer to perform the operations of the virtual-real fusion display method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 11 shows a block diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 11, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 11, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
Thememory 804 is configured to store various types of data to support operations at theelectronic device 800. Examples of such data include instructions for any application or method operating on theelectronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. Thememory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Thepower supply component 806 provides power to the various components of theelectronic device 800. Thepower components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for theelectronic device 800.
Themultimedia component 808 includes a screen that provides an output interface between theelectronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, themultimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when theelectronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
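As a further purely illustrative sketch under the same Android-style assumption, audio received by such a microphone may be read into a buffer that can then be stored in the memory 804 or handed to the communication component 816. The MicCapture class and the 16 kHz sample rate are assumptions introduced here for illustration.

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    // Illustrative only; requires the RECORD_AUDIO permission to be granted.
    public final class MicCapture {
        public static short[] captureOnce() {
            int sampleRate = 16000; // assumed rate, adequate for voice use cases
            int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            short[] pcm = new short[bufferSize / 2];
            recorder.startRecording();
            recorder.read(pcm, 0, pcm.length); // filled buffer may be stored or transmitted
            recorder.stop();
            recorder.release();
            return pcm;
        }
    }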
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor component 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components (e.g., the display and keypad of the electronic device 800); the sensor component 814 may also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
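By way of another non-limiting sketch (again assuming the public Android sensor API; the SensorProbe class is an illustrative assumption), an acceleration sensor and a contact-free proximity sensor of the kind described above may be observed as follows.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Illustrative only: register for accelerometer and proximity updates.
    public final class SensorProbe implements SensorEventListener {
        public void attach(Context context) {
            SensorManager sm =
                    (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_NORMAL);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_PROXIMITY),
                    SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                // event.values[0..2]: acceleration along x/y/z in m/s^2
            } else if (event.sensor.getType() == Sensor.TYPE_PROXIMITY) {
                // event.values[0]: distance to a nearby object, sensed without contact
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // accuracy changes are ignored in this sketch
        }
    }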
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as Wireless Fidelity (WiFi), a second-generation mobile communication technology (2G) or a third-generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
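As one more purely illustrative sketch under the same Android-style assumption, the availability of the WiFi and NFC capabilities described above may be queried as follows; the CommProbe class is an assumption introduced here for illustration.

    import android.content.Context;
    import android.net.wifi.WifiManager;
    import android.nfc.NfcAdapter;

    // Illustrative only: probe the communication capabilities of the device.
    public final class CommProbe {
        public static boolean wifiEnabled(Context context) {
            WifiManager wifi = (WifiManager) context.getApplicationContext()
                    .getSystemService(Context.WIFI_SERVICE);
            return wifi != null && wifi.isWifiEnabled();
        }

        public static boolean nfcReady(Context context) {
            NfcAdapter nfc = NfcAdapter.getDefaultAdapter(context); // null if NFC hardware is absent
            return nfc != null && nfc.isEnabled();
        }
    }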
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided, which includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
FIG. 12 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 12, the electronic device 1900 may be provided as a server. Referring to fig. 12, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions (e.g., application programs) executable by the processing component 1922. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
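In one purely illustrative, non-limiting example of such a server-side arrangement, an application module executed by the processing component 1922 may be exposed over a network roughly as follows. The sketch uses the JDK's built-in com.sun.net.httpserver package for brevity; the PositioningServer class, the /locate endpoint name, the port number, and the placeholder response are assumptions introduced here for illustration and are not the actual interface of the embodiments.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Illustrative only: a minimal server module executed by a processing component.
    public final class PositioningServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/locate", exchange -> {
                // A real module would run visual positioning here; this is a stub.
                byte[] body = "pose: <placeholder>".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }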
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 1932, is also provided, which includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), implements aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK) or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, and is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.