Disclosure of Invention
In view of the above, an object of the present invention is to provide a navigation method, a navigation apparatus, a navigation system, and a storage medium that display to the user navigation data corresponding to the landscape the user is actually viewing, avoid situations in which the navigation data is inconsistent with what the user actually sees, and improve the user experience.
The embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a navigation method applied to a navigation device, the method including: determining the identifier of the space to be navigated where the user is currently located; querying a pre-established database according to the identifier to obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, a plurality of explanation units, a first correspondence between each preset visual angle included in the panoramic view file and the real orientation, and a second correspondence between each explanation unit and each preset visual angle included in the panoramic view file; acquiring current orientation information of the navigation device itself; determining, according to the current orientation information, the first correspondence, and the second correspondence, a current preset visual angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset visual angle; and displaying the current preset visual angle and the unit to be explained.
In this way, the image of the panorama at the current preset visual angle displayed by the navigation device corresponds to the wall surface the user is facing at that moment, and the content of the current unit to be explained is likewise related to that wall surface. That is, the displayed panoramic image and the unit to be explained change as the user's orientation changes. The user therefore no longer needs to locate navigation content according to directional words appearing in the navigation data, perceives no inconsistency between the navigation data and what is actually seen, and the user experience is improved.
With reference to the embodiment of the first aspect, in a possible implementation manner, a signal transmitter is disposed in each space to be navigated, and a signal transmitted by the signal transmitter includes a tag of the space to be navigated where the signal transmitter is located; the determining the identifier of the space to be navigated where the user is currently located includes: acquiring a signal transmitted by the signal transmitter; and determining the label included in the signal with the strongest signal strength in the acquired signals as the identifier.
With reference to the first aspect, in a possible implementation manner, the number of preset visual angles included in the panoramic view file is the same as the number of wall surfaces of the space to be navigated corresponding to the panoramic view file; the panoramic image corresponding to each preset visual angle corresponds to a plurality of explanation units, each explanation unit is labeled with a polygon and an anchor link is created, and clicking the anchor link pops up a detailed content view included in the corresponding explanation unit.
With reference to the first aspect, in a possible implementation manner, each explanation unit includes an image-text file, an audio file, and a video file and corresponds to an area of a certain wall surface, where the area is either the whole wall surface or a local area on it; and the displaying the current preset visual angle and the unit to be explained includes: when an instruction triggered by the user and representing a navigation mode is obtained, forming a first layer and a second layer, where the first layer is located below the second layer and the second layer is hidden by default; displaying the panoramic image corresponding to the current preset visual angle on the first layer, and displaying the image-text file, the audio file, and the video file included in the unit to be explained on the second layer; labeling, on the first layer, the unit to be explained corresponding to the current preset visual angle with a polygon, and creating an anchor link; and popping up the second layer in response to an anchor-link click instruction.
With reference to the embodiment of the first aspect, in a possible implementation manner, the displaying the current preset view angle and the unit to be explained includes:
when the instruction triggered by the user and representing the navigation mode indicates the active navigation mode, actively and sequentially playing the audio files contained in the unit to be explained corresponding to the current preset visual angle, and highlighting, on the first layer, the polygon label of the explanation unit corresponding to the audio file currently being played; and when the instruction triggered by the user and representing the navigation mode indicates the passive navigation mode, responding to an anchor-link click instruction triggered by the user by displaying the image-text file, the audio file, and the video file that are contained in the unit to be explained corresponding to the anchor-link click instruction.
In a second aspect, an embodiment of the present application provides a navigation apparatus, applied to a navigation device, the apparatus including: a determining module, configured to determine the identifier of the space to be navigated where the user is currently located; an obtaining module, configured to query a pre-established database according to the identifier to obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, multiple explanation units, a first correspondence between each preset visual angle included in the panoramic view file and the real orientation, and a second correspondence between each explanation unit and each preset visual angle included in the panoramic view file; an acquiring module, configured to acquire current orientation information of the navigation device; the determining module being further configured to determine, according to the current orientation information, the first correspondence, and the second correspondence, a current preset visual angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset visual angle; and a display module, configured to display the current preset visual angle and the unit to be explained.
With reference to the second aspect, in a possible implementation manner, a signal transmitter is disposed in each space to be navigated, and a signal transmitted by the signal transmitter includes a tag of the space to be navigated where the signal transmitter is located; the determining module is used for acquiring the signal transmitted by the signal transmitter; and determining the label included in the signal with the strongest signal strength in the acquired signals as the identifier.
With reference to the second aspect, in a possible implementation manner, the number of preset views included in the panoramic view file is the same as the number of walls of the space to be navigated corresponding to the panoramic view file, the panoramic image corresponding to each preset view corresponds to a plurality of interpretation units, each interpretation unit is labeled by a polygon, an anchor link is created, and a detailed content view included in the corresponding interpretation unit can be popped up by clicking the anchor link.
With reference to the second aspect, in a possible implementation manner, each explanation unit includes an image-text file, an audio file, and a video file and corresponds to an area of a certain wall surface, where the area is either the whole wall surface or a local area on it; and the display module is specifically configured to: when an instruction triggered by the user and representing a navigation mode is obtained, form a first layer and a second layer, where the first layer is located below the second layer and the second layer is hidden by default; display the panoramic image corresponding to the current preset visual angle on the first layer, and display the image-text file, the audio file, and the video file included in the unit to be explained on the second layer; label, on the first layer, the unit to be explained corresponding to the current preset visual angle with a polygon, and create an anchor link; and pop up the second layer in response to an anchor-link click instruction.
With reference to the second aspect, in a possible implementation manner, the navigation mode includes an active navigation mode and a passive navigation mode, and the displaying the current preset viewing angle and the unit to be explained includes: when the active navigation mode is acquired, actively and sequentially playing audio files contained in the unit to be explained corresponding to the current preset visual angle, and highlighting the polygon label of the explaining unit corresponding to the currently played audio file on the first layer; and when the passive navigation mode is acquired, responding to an anchor point link click instruction triggered by a user, and displaying the image-text file, the audio file and the video file which are contained in the unit to be explained and correspond to the anchor point link click instruction.
In a third aspect, an embodiment of the present application further provides a navigation apparatus, including: the device comprises a memory, a processor, a positioning component and a transceiver which are connected with each other; the memory is used for storing programs; the processor calls a program stored in the memory to perform the method of the first aspect embodiment and/or any possible implementation manner of the first aspect embodiment.
In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium (hereinafter, storage medium), on which a computer program is stored, where the computer program is executed by a computer to perform the method in the foregoing first aspect and/or any possible implementation manner of the first aspect.
In a fifth aspect, an embodiment of the present application provides a navigation system, including a navigation device and a signal transmitter; the signal transmitter is used for transmitting a signal; the navigation device is used for determining, according to the received signal, the identifier of the space to be navigated where the user is currently located; querying a pre-established database according to the identifier to obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, a plurality of explanation units, a first correspondence between each preset visual angle included in the panoramic view file and the real orientation, and a second correspondence between each explanation unit and each preset visual angle included in the panoramic view file; acquiring current orientation information of the navigation device itself; determining, according to the current orientation information, the first correspondence, and the second correspondence, a current preset visual angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset visual angle; and displaying the current preset visual angle and the unit to be explained.
In combination with the fifth aspect of the embodiments, in one possible implementation, a plurality of signal transmitters are disposed in each navigation space, and the plurality of signal transmitters disposed in the same navigation space are located at different physical positions of the same horizontal plane.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, relational terms such as "first" and "second" may be used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Further, the term "and/or" in the present application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
First, an application scenario implemented by the present application is introduced.
For a building-type tourist attraction, a plurality of independent building spaces are often included. For example, the Palace Museum includes a plurality of independent palaces, and each palace includes a plurality of independent rooms; the Mogao Grottoes include a plurality of independent caves. Each room or cave in the above examples is an independent building space. Each building space includes a plurality of wall surfaces. For convenience of explanation, in the embodiments of the present application it is assumed that each building space includes six wall surfaces, i.e., an east wall, a south wall, a west wall, a north wall, a top surface, and a bottom surface. Of course, the above examples are merely for reference; it can be understood that a specially-shaped building space may have more or fewer wall surfaces.
For each wall surface of a building space, the corresponding landscape may differ. For example, in each cave of the Mogao Grottoes, different murals are painted on each wall; and in the individual rooms included in the Palace Museum, the decorative carvings on each wall are different.
The contents displayed on these wall surfaces have rich historical, artistic, and humanistic value and reflect specific architectural themes or ideologies, so professional guides or equipment are generally required when visitors view such landscapes. However, since each building space includes a plurality of walls, navigation requires a large number of directional terms to locate content within the building space before further description can proceed. For tourists unfamiliar with the contents of the walls, it is therefore difficult to quickly find the corresponding content, the interpretation precision is low, and the actual use effect is poor.
In order to solve the foregoing problems, embodiments of the present application provide a navigation method, a navigation apparatus, a navigation system, and a storage medium; the navigation technique can be implemented by corresponding software, hardware, or a combination of software and hardware. The following describes the embodiments of the present application in detail, taking the guiding of grotto murals as an example. A grotto site generally consists of a plurality of relatively independent grottoes, and each indivisible grotto is an independent building space as mentioned in the embodiments of the present application.
Referring to fig. 1, the present embodiment provides a navigation system 10, which includes a navigation device 100 and a signal transmitter 200.
The signal transmitter 200 needs to meet the requirements of high precision, low power consumption, convenient deployment, no influence on cultural relics and the like, and meanwhile, the signal transmitted by the signal transmitter 200 is broadcast data, and the signal at least comprises the equipment identification of the signal transmitter 200 transmitting the signal.
The signal transmitters 200 are disposed in the building spaces, so that one signal transmitter 200 exists in each building space, and the building space identifier and the device identifier of the signal transmitter 200 are associated with each other.
As an alternative implementation, the device identifier may include a unique device number, a block code, and a device code. The unique device number is the factory number of the signal transmitter 200, while the block code and the device code can be configured by the user. In one embodiment of the present application, the block code of the signal transmitter 200 may be used as the identifier of the building space to which it belongs; that is, the identifier of the building space serves as the tag.
The navigation device 100 is used to receive the signal transmitted by the signal transmitter 200.
Since the signal transmitted by each signal transmitter 200 is broadcast data, it can be understood that, in theory, the navigation device 100 can acquire signals transmitted by a plurality of signal transmitters 200 within its signal reception range.
Furthermore, since the closer the distance, the smaller the signal attenuation, the signal strength of a signal received by the navigation device 100 can be used as a measure of the distance between the navigation device 100 and the signal transmitter 200 that transmitted the signal. Therefore, when the navigation device 100 receives a plurality of signals, the building space corresponding to the signal with the strongest signal strength may be determined as the building space closest to the navigation device 100 (i.e., the building space in which it is currently located). Since the signal includes the device identifier of the signal transmitter 200 and the device identifier has a corresponding relationship with the independent space, the navigation device 100 can determine the identifier of the building space where it is currently located according to the device identifier included in the signal with the strongest signal strength, that is, determine the identifier of the current space to be navigated.
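The strongest-signal selection described above can be sketched in a few lines. This is a minimal illustration only; the `(rssi, tag)` tuple shape and the example tag values are assumptions for clarity, not part of the embodiment:

```python
# Hypothetical sketch: picking the space-to-be-navigated tag from received
# broadcast signals. Field names and tag values are illustrative assumptions.

def strongest_tag(signals):
    """signals: list of (rssi_dbm, block_code) tuples from received broadcasts.
    Returns the block code (building-space tag) of the strongest signal,
    or None if nothing was received."""
    if not signals:
        return None
    rssi, tag = max(signals, key=lambda s: s[0])  # higher RSSI = closer transmitter
    return tag

# e.g. three transmitters heard; the space tagged "C-257" is nearest
print(strongest_tag([(-71, "C-045"), (-58, "C-257"), (-83, "C-112")]))  # C-257
```

A real implementation would additionally smooth RSSI over time to avoid flapping between adjacent spaces, which is one motivation for the multi-transmitter deployment described below.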
Furthermore, if two building spaces are located too close to each other, or if, owing to the spatial structure, the signal strengths of the signals the user receives from the signal transmitters 200 of different building spaces are similar, the navigation device 100 may not be able to accurately determine the identifier of the space to be navigated in which the visitor is currently located.
To alleviate the above problem, as an alternative embodiment, at least two signal transmitters 200 may be deployed at different physical positions on the same horizontal plane of each building space affected by the above problem (for example, if one signal transmitter 200 is deployed on the left side of the floor, the other should also be deployed on the floor, for example on the right side, rather than at an elevated position), and the signal transmitters 200 belonging to the same building space are configured with the same block code and different device codes, so that the signals broadcast by all the signal transmitters 200 in the same building space cover the whole building space as far as possible. When the navigation device 100 subsequently receives a plurality of signals, the identifier of the building space corresponding to the signal with the strongest signal strength is still used as the identifier of the space to be navigated.
Of course, as an alternative embodiment, the tourist may also manually input the identifier of the building space where the tourist is currently located into the navigation device 100, so that the navigation device 100 acquires the identifier of the space to be navigated.
After acquiring the identifier of the space to be navigated, the navigation device 100 queries a pre-established database, reads the navigation data corresponding to the identifier of the space to be navigated from the database, and starts navigating the user; for the specific navigation process, refer to the following method embodiment.
In addition, referring to fig. 2, an embodiment of the present application provides a navigation device 100, which can provide a navigation service for tourists.
Alternatively, the navigation device 100 may be, but is not limited to, a smart phone, a tablet computer, a Mobile Internet Device (MID), a personal digital assistant, or the like.
The navigation device 100 may include: a processor 110, a memory 120, a display screen 130, a positioning component 140, a transceiver 150, and a speaker 160.
It should be noted that the components and structure of the navigation device 100 shown in fig. 2 are exemplary only and not limiting, and the navigation device 100 may have other components and structures as desired.
The processor 110, the memory 120, the display screen 130, the positioning component 140, the transceiver 150, the speaker 160, and any other components of the navigation device 100 are electrically connected to one another, directly or indirectly, to enable the transmission and interaction of data. For example, these components may be electrically connected to one another via one or more communication buses or signal lines.
The memory 120 is used for storing a program, for example a program corresponding to the navigation method described later or the navigation apparatus described later. Optionally, when the navigation apparatus is stored in the memory 120, the navigation apparatus includes at least one software function module that can be stored in the memory 120 in the form of software or firmware.
Alternatively, the software function modules included in the navigation apparatus may be built into an Operating System (OS) of the navigation device 100.
The processor 110 is adapted to execute executable modules stored in the memory 120, such as the software function modules or computer programs included in the navigation apparatus. When the processor 110 receives an execution instruction, it may execute the computer program, for example to perform: determining the identifier of the space to be navigated where the user is currently located; querying a pre-established database according to the identifier to obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, a plurality of explanation units, a first correspondence between each preset visual angle included in the panoramic view file and the real orientation, and a second correspondence between each explanation unit and each preset visual angle included in the panoramic view file; acquiring current orientation information of the navigation device itself; determining, according to the current orientation information, the first correspondence, and the second correspondence, a current preset visual angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset visual angle; and displaying the current preset visual angle and the unit to be explained.
The display screen 130 is used for displaying the panorama view information and the graphic and text information related to the mural contents to the visitor when the navigation data includes the panorama view file and the graphic and text file.
The positioning component 140 is used to determine the current orientation information of the navigation device 100. The positioning component 140 may include, but is not limited to, a geomagnetic sensor and a gyroscope.
The transceiver 150 is used for transmitting and receiving signals and commands.
The speaker 160 is used to present a voice introduction associated with the mural to the visitor when the navigation data includes voice information.
Of course, the method disclosed in any of the embodiments of the present application can be applied to the processor 110, or implemented by the processor 110.
The following description will be made for the navigation method provided in the present application.
Referring to fig. 3, the present embodiment provides a navigation method applied to the navigation device 100, which includes the following steps.
Step S110: and determining the identification of the space to be navigated where the user is currently located.
As an alternative embodiment, the identifier of the space to be navigated determined by the navigation device 100 may be provided directly by the user, for example by direct input, or by the user scanning a two-dimensional code of the space to be navigated in which the user is currently located. In practice, however, this approach requires the user to scan the two-dimensional code or submit the location number frequently, so the interaction experience is not good enough.
Thus, as another alternative, the signal transmitter 200 may be provided in each space to be navigated as described above, and the signal transmitted by the signal transmitter 200 includes the tag of the space to be navigated in which it is located.
After acquiring the signals transmitted by a plurality of different signal transmitters 200, the navigation device 100 determines the tag included in the signal with the strongest signal strength among the acquired signals as the identifier of the space to be navigated. The space to be navigated is the building space where the user is currently located, and the navigation device 100 is required to navigate the landscape of this building space.
Step S120: and inquiring a pre-established database according to the identification to obtain navigation data corresponding to the space to be navigated, wherein the navigation data comprises a panoramic view file, a plurality of explanation units, a first corresponding relation between each preset visual angle included by the panoramic view file and the real orientation, and a second corresponding relation between each explanation unit and each preset visual angle included by the panoramic view file.
Wherein the database is pre-established by the staff.
The database includes navigation data corresponding to the identifier of each building space. Each piece of navigation data includes a panoramic view file of the corresponding building space, a plurality of explanation units corresponding to the wall surfaces of that building space, a first correspondence between each preset visual angle included in the panoramic view file and the real orientation, and a second correspondence between each explanation unit and each preset visual angle included in the panoramic view file.
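One possible in-memory shape for such a navigation-data record is sketched below. All field names, identifiers, and media paths are assumptions for illustration, not the actual schema of the embodiment:

```python
# Minimal sketch (assumed field names, not the patented schema) of one
# navigation-data record keyed by the building-space identifier.
navigation_data = {
    "C-257": {
        "panorama_file": "c257_pano.jpg",
        # first correspondence: preset visual angle -> real orientation
        # (degrees clockwise from magnetic north)
        "view_to_orientation": {"north": 0.0, "east": 90.0,
                                "south": 180.0, "west": 270.0},
        # second correspondence: preset visual angle -> explanation-unit ids
        "view_to_units": {"north": ["u1"], "east": ["u2", "u3"],
                          "south": ["u4"], "west": ["u5"]},
        # each explanation unit bundles rich media introducing the murals
        # on its wall area
        "units": {"u2": {"text": "flying-apsara mural", "audio": "u2.mp3",
                         "video": "u2.mp4"}},
    },
}

record = navigation_data["C-257"]
print(record["view_to_units"]["east"])  # units explained on the east wall
```

Whether such records live on the device or in the cloud only changes how the lookup by identifier is performed, not the shape of the data.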
The process of creating the database will be described below by taking one of the building spaces as an example.
For a certain building space a, a worker first determines the identifier of the building space a according to the block code of the signal transmitter 200 deployed in it, and then determines the number of wall surfaces of the building space a. In this example the building space a has six wall surfaces, namely the east, south, west, and north walls, the top surface, and the bottom surface.
Then, the worker captures images of the respective preset visual angles in the building space a to compose a panoramic view file. The panoramic view file includes a plurality of preset visual angles, the number of which is the same as the number of wall surfaces of the building space a. For example, if the building space a has six wall surfaces, the corresponding panoramic view file includes six preset visual angles, and the panoramic image displayed at each preset visual angle is related to the content of the corresponding wall surface.
When the panoramic view file is established, the starting position of the captured panoramic image is first defined as 0° of the panoramic view file's coordinate system. An initial preset visual angle is then selected from the panoramic image; this is the preset visual angle first shown when the user opens the panoramic view file. The horizontal rotation angle α of the initial preset visual angle relative to the starting position in the panoramic view file is calculated. Next, a worker, holding the positioning component 140 and facing the wall surface corresponding to the initial preset visual angle, records the angular deviation β of that wall surface relative to magnetic north. The coordinate offset between the panoramic view file and the real world is then γ = β − α. Accordingly, if the current angular deviation of the user's device from magnetic north is φ, the corresponding horizontal rotation angle of the viewing position in the panoramic view file is ψ = φ − γ. In this way, the first correspondence between each preset visual angle included in the panoramic view file and the real orientation is obtained.
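The angle bookkeeping above can be expressed directly. This is a sketch of the two relations γ = β − α and ψ = φ − γ only (all angles in degrees, normalized to [0, 360)), not the claimed implementation:

```python
# Sketch of the panorama/real-world angle offset described in the text.

def pano_offset(alpha, beta):
    """alpha: horizontal rotation of the initial preset visual angle from the
    panorama's 0-degree starting position; beta: angular deviation of the
    corresponding wall from magnetic north. Returns gamma = beta - alpha,
    the panorama-to-world offset."""
    return (beta - alpha) % 360

def pano_angle(phi, gamma):
    """phi: device's current angular deviation from magnetic north.
    Returns psi = phi - gamma, the horizontal rotation angle at which the
    panoramic view file should be displayed."""
    return (phi - gamma) % 360

gamma = pano_offset(alpha=30.0, beta=120.0)  # gamma = 90.0
print(pano_angle(phi=200.0, gamma=gamma))    # psi = 110.0
```

The modulo keeps both results in [0, 360) even when the subtraction goes negative, e.g. a heading of 10° with γ = 90° maps to ψ = 280°.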
At this point, for the building space a, part of the navigation data corresponding to its identifier has been formed: the panoramic view file and the first correspondence between each preset visual angle included in the panoramic view file and the real orientation.
Subsequently, the staff establishes an explanation unit corresponding to each wall surface included in the building space a, with each wall surface as a unit. Because the wall surface corresponds to the preset visual angle, the explaining unit also corresponds to the preset visual angle, and a second corresponding relation is formed.
A preset visual angle may correspond to a single explanation unit, in which case the content displayed by that explanation unit corresponds to the whole area of the wall surface corresponding to the preset visual angle. Alternatively, a preset visual angle may correspond to a plurality of explanation units, in which case the content displayed by each explanation unit corresponds to a local area of the wall surface corresponding to the preset visual angle, and the contents displayed by all the explanation units corresponding to the preset visual angle combine to cover the whole area of that wall surface.
For each explanation unit, the displayed content includes rich media such as image files, audio files, video files, and text files, or links to such rich media, and is used to introduce the content of the mural displayed on the wall surface corresponding to the associated preset viewing angle.
In addition, each explanation unit is marked by a polygon, and an anchor link is created; clicking the anchor link pops up a detailed content view included in the corresponding explanation unit.
After the database is established, it may be stored in the navigation device 100 or in the cloud, so that the navigation device 100 can query the corresponding navigation data according to the identifier of the space to be navigated.
Of course, it can be understood that if the database is stored in the cloud, the navigation device 100 needs to perform data interaction with a cloud server to obtain the relevant content from the database.
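A local-storage variant of the database lookup might be sketched as follows; all keys and identifiers are assumptions, and the cloud-hosted case would replace the dictionary access with a request to the cloud server:

```python
# Hypothetical in-device database: a mapping from the identifier of a
# space to be navigated to its navigation data (field names invented).
DATABASE = {
    "space_a": {
        "panorama_file": "space_a.pano",
        "explanation_units": ["unit_1", "unit_2"],
        "first_correspondence": {},   # preset viewing angle -> real bearing
        "second_correspondence": {},  # preset viewing angle -> units
    },
}

def query_navigation_data(space_id: str):
    """Return the navigation data for the identified space, or None."""
    return DATABASE.get(space_id)
```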
Step S130: acquiring the current orientation information of the mobile terminal.
As mentioned above, the positioning component 140 is provided within the navigation device 100 and can detect the current orientation information of the navigation device 100 in real time.
As an alternative embodiment, the positioning component 140 may include a geomagnetic sensor and a gyroscope. The navigation device 100 acquires the angular deviation between itself and the magnetic north pole through the geomagnetic sensor and determines that angular deviation as its current orientation information.
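As a rough, hedged sketch of how a heading could be derived from a geomagnetic sensor reading (assuming the device is held level, and omitting the gyroscope/accelerometer tilt compensation a real implementation would need):

```python
import math

# Assumed axis convention: mx points toward magnetic north and my toward
# east when the device is level; real sensors differ and require a
# device-specific rotation. This is an illustration, not a spec.
def heading_from_magnetometer(mx: float, my: float) -> float:
    """Return a heading in degrees in [0, 360), 0 = magnetic north."""
    return math.degrees(math.atan2(my, mx)) % 360.0
```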
Step S140: determining, according to the current orientation information, the first corresponding relationship, and the second corresponding relationship, a current preset viewing angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset viewing angle.
In the embodiment of the present application, the number of explanation units is six, and the panoramic view file includes six preset viewing angles.
After the current orientation information of the navigation device 100 is determined, the navigation device 100 queries the acquired navigation data and compares its current orientation information with the first corresponding relationship, thereby determining the current preset viewing angle corresponding to that orientation information and, through the second corresponding relationship, the unit to be explained corresponding to the current preset viewing angle.
As an optional implementation, in the first corresponding relationship and the second corresponding relationship, when the current orientation information falls within a certain preset range, it corresponds to the preset viewing angle associated with that range and to the explanation units corresponding to that preset viewing angle. That is, whenever the navigation device 100 detects that the user's current orientation lies within the same preset range, the same preset viewing angle and the same units to be explained are selected. For example, suppose the navigation device 100 detects the user's current orientation as A, and at the next moment the orientation changes to B; if both A and B belong to the preset range corresponding to preset viewing angle a, then the current preset viewing angle determined by the navigation device 100, and the unit to be explained corresponding to it, do not change between the two moments.
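The preset-range matching can be illustrated as follows; the ranges and identifiers are invented. Each preset viewing angle owns a contiguous bearing range, so two orientations inside the same range select the same viewing angle:

```python
# Hypothetical first correspondence expressed as bearing ranges in
# degrees: [lo, hi) selects the named preset viewing angle.
RANGES = {
    "view_a": (45.0, 135.0),
    "view_b": (135.0, 225.0),
}

def match_view(bearing: float):
    """Return the preset viewing angle whose range contains the bearing."""
    for view_id, (lo, hi) in RANGES.items():
        if lo <= bearing % 360.0 < hi:
            return view_id
    return None
```

Because orientations 50° and 60° both fall inside the range of `view_a`, a small turn of the user's head does not switch the displayed viewing angle, matching the A-to-B behaviour described above.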
Step S150: displaying the unit to be explained and the current preset viewing angle.
Subsequently, the navigation device 100 displays the determined current preset viewing angle and the unit to be explained corresponding to it.
When displaying, the navigation device 100 forms a first layer and a second layer, where the first layer is located below the second layer and the second layer is hidden by default. The navigation device 100 displays the panoramic image corresponding to the determined current preset viewing angle on the first layer, displays the image-text, audio, and video files included in the determined unit to be explained on the second layer, marks the unit to be explained on the first layer in polygonal form, and creates an anchor link. When the user clicks the anchor link, the navigation device 100 responds to the anchor-link click command by popping up the second layer.
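The two-layer behaviour can be modelled as a small state machine; this Python sketch is illustrative only (names are invented, and a real implementation would live in the device's UI layer):

```python
from dataclasses import dataclass

@dataclass
class LayeredDisplay:
    """First layer shows the panorama with polygon-labelled anchor links;
    the second layer holds a unit's rich media and is hidden by default."""
    current_view: str
    second_layer_visible: bool = False
    second_layer_unit: str = ""

    def click_anchor(self, unit_id: str) -> None:
        """Respond to an anchor-link click by popping up the second layer."""
        self.second_layer_unit = unit_id
        self.second_layer_visible = True

    def close_second_layer(self) -> None:
        """Hide the second layer again, returning to the panorama view."""
        self.second_layer_visible = False
        self.second_layer_unit = ""
```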
In addition, in the embodiment of the present application, the user may select a navigation mode, where the navigation modes include an active navigation mode and a passive navigation mode.
When the navigation device 100 acquires the active navigation mode triggered by the user, it actively plays, in sequence, the audio files contained in the unit to be explained corresponding to the current preset viewing angle. In addition, the navigation device 100 may highlight, on the first layer, the polygon label of the explanation unit corresponding to the audio file currently being played, for example by thickening the polygon, changing the color of the polygon frame, or making the polygon frame flash, until the navigation device 100 determines that the current preset viewing angle has switched to another preset viewing angle.
When the navigation device 100 acquires the passive navigation mode triggered by the user, it displays the second layer in response to an anchor-link click command triggered by the user, and shows on the second layer the image-text, audio, and video files contained in the unit to be explained that corresponds to that anchor-link click command.
It should be noted that, in both the active mode and the passive mode, the user can independently click the anchor link of any explanation unit at the current preset viewing angle, so that the navigation device 100 pops up the second layer, allowing the user to conveniently view the image-text, audio, video, and other content included in that explanation unit.
The current preset viewing angle of the panoramic view file displayed by the navigation device 100, and the content of the corresponding unit to be explained, are related to the content of the wall surface the user faces at that moment; that is, both change as the orientation of the user changes. The content of the units to be explained therefore no longer needs to include orientation words, and the user no longer needs to infer what the navigation content refers to from such words, so no inconsistency between the navigation data and the content actually seen is experienced, and the user experience is improved.
As shown in fig. 4, an embodiment of the present application further provides a navigation apparatus 400, where the navigation apparatus 400 may include:
a determining module 410, configured to determine an identifier of a space to be navigated where a user is currently located;
an obtaining module 420, configured to query a pre-established database according to the identifier and obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, multiple explanation units, a first corresponding relationship between each preset viewing angle included in the panoramic view file and the real orientation, and a second corresponding relationship between each preset viewing angle included in the panoramic view file and each explanation unit;
the obtaining module 420 is further configured to obtain current orientation information of the mobile terminal;
the determining module 410 is further configured to determine, according to the current orientation information, the first corresponding relationship, and the second corresponding relationship, a current preset viewing angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset viewing angle;
a display module 430, configured to display the current preset viewing angle and the unit to be explained.
In a possible implementation manner, a signal transmitter is arranged in each space to be navigated, and the signal transmitted by each signal transmitter includes the tag of the space to be navigated in which that transmitter is located; the determining module 410 is configured to acquire the signals transmitted by the signal transmitters and determine, as the identifier, the tag included in the signal with the strongest signal strength among the acquired signals.
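The strongest-signal selection can be sketched as follows; the signal representation is an assumption (for example, Bluetooth RSSI in dBm, where a higher value means a stronger signal):

```python
# Each received signal carries the tag of the space whose transmitter
# sent it; the tag from the strongest signal is taken as the identifier
# of the space the user is currently in.
def pick_space_identifier(signals):
    """signals: list of (tag, strength) pairs; returns the winning tag."""
    if not signals:
        return None
    tag, _strength = max(signals, key=lambda s: s[1])
    return tag
```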
In a possible implementation manner, the number of preset viewing angles included in the panoramic view file is the same as the number of wall surfaces of the space to be navigated corresponding to the panoramic view file, the panoramic image corresponding to each preset viewing angle corresponds to a plurality of explanation units, each explanation unit is labeled by a polygon and an anchor link is created, and clicking the anchor link pops up a detailed content view included in the corresponding explanation unit.
In a possible implementation manner, the explanation unit includes an image-text file, an audio file, and a video file, and corresponds to an area of a certain wall surface, where the area is the whole wall surface or a local area of that wall surface; the display module 430 is configured to, when an instruction representing a navigation mode triggered by the user is obtained, form a first layer and a second layer, where the first layer is located below the second layer and the second layer is hidden by default; display the panoramic image corresponding to the current preset viewing angle on the first layer, and display the image-text file, the audio file, and the video file included in the unit to be explained on the second layer; mark the unit to be explained corresponding to the current preset viewing angle on the first layer in polygonal form, and create an anchor link; and pop up the second layer in response to an anchor-link click instruction.
In a possible implementation manner, the navigation mode includes an active navigation mode and a passive navigation mode; the display module 430 is configured to, when the active navigation mode is acquired, actively play in sequence the audio files contained in the unit to be explained corresponding to the current preset viewing angle and highlight, on the first layer, the polygon label of the explanation unit corresponding to the audio file currently being played; and, when the passive navigation mode is acquired, respond to an anchor-link click instruction triggered by the user by displaying the image-text file, the audio file, and the video file contained in the unit to be explained that corresponds to the anchor-link click instruction.
The navigation apparatus 400 provided in the embodiment of the present application has the same implementation principle and the same technical effect as the foregoing method embodiments, and for the sake of brevity, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the apparatus embodiment that are not mentioned.
In addition, the embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a computer, the steps included in the navigation method described above are executed.
To sum up, the navigation method, navigation apparatus, navigation device, navigation system, and storage medium provided in the embodiments of the present invention involve: determining the identifier of the space to be navigated where the user is currently located; querying a pre-established database according to the identifier to obtain navigation data corresponding to the space to be navigated, where the navigation data includes a panoramic view file, a plurality of explanation units, a first corresponding relationship between each preset viewing angle included in the panoramic view file and the real orientation, and a second corresponding relationship between each explanation unit and each preset viewing angle included in the panoramic view file; acquiring current orientation information of the mobile terminal; determining, according to the current orientation information, the first corresponding relationship, and the second corresponding relationship, a current preset viewing angle corresponding to the current orientation information in the panoramic view file and a unit to be explained corresponding to the current preset viewing angle; and displaying the current preset viewing angle and the unit to be explained.
The content of the explanation unit displayed by the navigation device is related to the content of the wall surface the user faces at that moment, and the panoramic image displayed at the current preset viewing angle also corresponds to that wall surface; that is, both change as the orientation of the user changes. For the user, the navigation content therefore no longer needs to be located according to orientation words appearing in the navigation data, no inconsistency between the navigation data and the content actually seen is experienced, and the user experience is improved.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions may be stored in a storage medium if they are implemented in the form of software function modules and sold or used as separate products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.