Interactive live-action navigation method

Technical Field
The invention relates to an interactive live-action navigation method that treats the user as an organic component of the navigation system and updates street-view/live-action data in a timely manner based on user feedback; user habits and preferences are analyzed through big data to improve the user experience. The method belongs to the technical field of navigation.
Background
Live-action (street-view) navigation is the most intuitive navigation mode: during travel, images of surrounding buildings or scenery serve as references to guide the correct direction of travel. However, this navigation approach has several problems:
First: which landmarks or signs attract attention, and which points are sensitive, vary from person to person;
Second: the viewing angle and field of view differ depending on whether the user is walking or riding in a vehicle, and on which vehicle is used;
Third: real scenes change continuously, for example new buildings appear, street-side facilities change, buildings are renovated, and shops turn over. Timely updating of street-view/live-action data is the key to keeping navigation effective, but large-scale timely updating is difficult to achieve in the prior art.
It is difficult for existing electronic-map operators to establish a live-action database of road scenes and keep it up to date, and an indoor scene database would be enormous. Establishing the live-action navigation database more efficiently, with good interaction with the user, is therefore a problem to be solved.
Disclosure of Invention
The prior art focuses on the services a navigation system provides to the user, leaving the user in a passive, subordinate position. The interactive live-action navigation method of the invention treats the user as an organic component of the navigation system and updates street-view/live-action data in a timely manner based on user feedback; user habits and preferences are analyzed through big data, and information points in the street-view/live-action data that are closer to the user's preferences are prompted preferentially, improving the user experience.
The technical scheme of the invention is as follows: the interactive live-action navigation system comprises at least a live-action navigation data processing system and an intelligent terminal used by the user, wherein the live-action navigation data processing system comprises at least an electronic map and a basic database of live-action navigation points corresponding to the position coordinates of the electronic map. The navigation method comprises the following steps:
S101: during travel, the user collects, through the intelligent terminal, feature data of points along the way that the user considers usable as reference points; the feature data of a reference point is one or more of image, video, voice, and symbolic character combinations;
a reference point is a building, piece of scenery, sign, or the like whose position is fixed over the long term, so that it can serve as a reference;
S102: the live-action navigation data processing system analyzes the feature data of the reference point and cross-compares it with the basic database of live-action navigation points stored in the system; if a live-action navigation point consistent with the reference point is found, the user is prompted that the reference point is confirmed; if no consistent live-action navigation point is found, the feature data of the reference point is stored in a pending navigation-point database, and the user is prompted to continue looking for new reference points;
S103: the live-action navigation data processing system aggregates and analyzes the pending navigation-point database; when the feature data of a pending navigation point has been collected by several different users, i.e., appears repeatedly, the feature data is verified as correct, then sorted and stored into the basic database of live-action navigation points.
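Steps S101 through S103 can be sketched as follows. This is a minimal illustration under stated assumptions: the data structures (a base database keyed by a feature fingerprint, a pending store tracking which users reported each unmatched point) and the promotion threshold are invented for the example, not taken from the patent.

```python
from collections import defaultdict

PROMOTION_THRESHOLD = 3  # hypothetical: distinct users needed to verify a pending point

class NavPointStore:
    def __init__(self):
        self.base = {}                   # fingerprint -> navigation-point record
        self.pending = defaultdict(set)  # fingerprint -> set of reporting user ids

    def submit_reference_point(self, user_id, fingerprint, record):
        """S101/S102: compare a user-collected reference point with the base database."""
        if fingerprint in self.base:
            return "confirmed"           # prompt the user: reference point confirmed
        # no match: store in the pending database and prompt the user to keep looking
        self.pending[fingerprint].add(user_id)
        self._maybe_promote(fingerprint, record)
        return "pending"

    def _maybe_promote(self, fingerprint, record):
        """S103: once enough distinct users report the same point, treat it as verified."""
        if len(self.pending[fingerprint]) >= PROMOTION_THRESHOLD:
            self.base[fingerprint] = record
            del self.pending[fingerprint]
```

In practice the fingerprint would come from image or speech recognition rather than being supplied directly; the sketch only shows the confirm/pending/promote control flow.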
It should be noted that in step S103, if the coordinate position already holds earlier live-action navigation-point data, the system analyzes whether the occurrence times of the earlier point and the pending point satisfy a precedence relationship. If they do, the pending navigation point replaces the earlier live-action navigation point, and the earlier point's data is converted into historical data; if the occurrence times of the two do not satisfy the precedence relationship and overlap, the cause is analyzed manually. The probability of this situation is low, so such cases can be handled individually. Through this process, the broad user base naturally participates in the real-time updating of the basic database of live-action navigation points, avoiding the huge workload that continuous data updating would impose on system maintenance personnel.
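The replacement rule in step S103 can be written as a small decision function. This is a hedged sketch: the record fields (`first_seen`/`last_seen` observation timestamps) and the three action names are assumptions chosen to illustrate the precedence test, not terms defined by the patent.

```python
def resolve_navpoint(existing, candidate):
    """Decide what to do with a pending point at a coordinate that may already
    hold earlier live-action navigation data.

    `existing` and `candidate` are dicts with 'first_seen' and 'last_seen'
    timestamps (e.g. epoch seconds); `existing` may be None.
    """
    if existing is None:
        return "store_candidate"
    # Strict precedence: the earlier point stopped being observed before the
    # pending point first appeared, so the scene changed; replace and archive.
    if existing["last_seen"] < candidate["first_seen"]:
        return "replace_and_archive"
    # Overlapping observation windows violate the precedence relation; the
    # patent routes this rare case to manual analysis.
    return "manual_review"
```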
Further (including step S103), the feature data of reference points is actively collected by a large number of users during navigation; by simplifying and integrating this feature data, normalized live-action navigation-point data can be formed, and the basic database of live-action navigation points is supplemented, completed, and updated in a timely manner, making it more three-dimensional and accurate.
Because users are not professionals and their purpose is not to collect standard data, user-collected data has various defects and is difficult to use directly. However, repeated collection of the same reference point by many users produces a strong complementary effect: by integrating images shot by users at different positions and angles, normalized data can be obtained through technical means such as image recognition and image stitching, supplementing and improving the basic database at very low cost. For example, for a shop occupying a large area, combining many images shot from different positions and angles can show the shop's full appearance and even compose a complete 3D effect map; it would clearly be impractical for system maintenance personnel to do this. Of course, reward measures may be adopted to guide users to collect data at such locations more accurately and more thoroughly.
It should be noted that identifying a reference point does not necessarily require its complete data; displaying the complete data of the reference point (e.g., a 3D effect map), however, can greatly improve its recognizability.
Furthermore, since the user participates in navigation by actively finding reference points, once user behavior data has accumulated for a period of time, big-data analysis can derive personalized rules for the reference points the user pays attention to. Then, after the user starts the navigation function and the navigation route is determined, the intelligent navigation function is activated:
according to the personalized rules, the live-action navigation data processing system pushes information about reference points the user pays more attention to, together with live-action images along the way, to the user's intelligent terminal, improving the efficiency with which the user distinguishes the travel route. During this process, if the user passes and finds a reference point prompted by the system, the user confirms it through the intelligent terminal, verifying the validity of the reference point in real time; if the user does not find the prompted reference point and considers the prompt wrong, the user reports the error through the intelligent terminal, and the system records this and verifies the validity of the reference point.
The system can push several reference points at once, making it convenient for the user to quickly confirm whether the travel route is correct.
Furthermore, big-data analysis of the navigation data of a large number of users makes it possible to identify the focus points of each user category and to rank navigation points by priority, providing more accurate live-action navigation prompts. User categories can be subdivided by parameters such as age and mode of travel (walking, riding, driving).
It should be noted that the live-action navigation data processing system is usually deployed on the network server side, though integrating some or all of its functions into the user's intelligent terminal is certainly not excluded.
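The category-based ranking can be illustrated as a weighted sort. The attention weights per travel mode below are invented for the example; in practice they would be derived from big-data analysis of accumulated user behavior, as the text describes.

```python
# Hypothetical attention weights per user category (travel mode x point kind).
ATTENTION_WEIGHTS = {
    "walking": {"shop": 1.0, "sign": 0.6, "building": 0.4},
    "driving": {"sign": 1.0, "building": 0.8, "shop": 0.2},
}

def rank_navpoints(points, travel_mode):
    """Sort candidate navigation points by the category-specific attention weight,
    highest first, so the most recognizable points for this user are pushed first."""
    weights = ATTENTION_WEIGHTS.get(travel_mode, {})
    return sorted(points, key=lambda p: weights.get(p["kind"], 0.0), reverse=True)
```

For a pedestrian this ranks shop fronts first; for a driver, road signs, matching the intuition that focus points differ between user categories.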
The invention has the beneficial effects that:
1. the invention establishes and updates the live-action navigation database more efficiently and at low cost, and interacts with the user beneficially, improving the user experience;
2. prior-art schemes automatically compare, in real time, all images generated during travel; this consumes considerable resources, comparing every image is unnecessary, and the user experience is poor. The invention adds a user-interaction link so that comparison can be selective, greatly reducing the amount of computation and improving accuracy;
3. the invention relies mainly on live-action recognition to complete the navigation process, completely freeing it from dependence on a high-precision positioning system, making it low-cost and more intuitive; of course, for higher-speed driving scenarios, it still needs to be used in combination with a positioning system;
4. through the participation of a large number of users and big-data analysis, navigation points can be ranked by priority and navigation accuracy improved, forming a virtuous interaction.
Drawings
FIG. 1: flow chart of the interactive live-action navigation method of the invention (autonomous navigation mode);
FIG. 2: flow chart of the interactive live-action navigation method of the invention (intelligent navigation mode).
Detailed Description
Example 1:
this embodiment focuses on describing in detail how the user collects the feature data of reference points and the subsequent processing.
As shown in fig. 1, the interactive live-action navigation method in autonomous navigation mode specifically includes:
First, autonomous navigation is started: the user provides reference-point information and an image of the initial position, and the system computes the user's initial position coordinates. For example, the user can use the intelligent terminal to shoot a wide-angle video or image sweeping about 90 degrees to the left and right of the direction of travel; the system extracts valid navigation points from it and computes the direction and position of the shot, thereby determining the initial position coordinates.
Second, the user searches for reference points along the way and collects them;
the user collects the feature data of reference points along the way through the intelligent terminal in one or more of the following four modes:
Mode one: shooting video along the way; valid reference points are identified and extracted from the video images either in an automatic continuous-recognition mode, or in a manual mode in which the user triggers the recognition process;
for example, when the user briefly holds the lens on a found reference point, a short pause is produced, and the system triggers the recognition process upon detecting the pause;
Mode two: snapping images of reference points; the whole image is then recognized automatically, or the user manually selects a region in the image for recognition and matching, so as to extract valid reference points;
Mode three: entering reference-point information by voice, i.e., after finding a reference point, the user directly speaks its feature information; the system performs speech recognition and matches the result against information in the basic database of live-action navigation points, so as to extract valid reference points;
for example, the user can say "there is an intersection ahead", "there is a small square ahead", "XX company is on the left", "XX restaurant is on the right", and so on, expressing the features of the reference point with a simple description;
Mode four: using a wearable intelligent device, the region the user is attending to is identified from the gaze focus; an image is then snapped at the gaze-focus position and recognized and matched, so as to extract valid reference points;
Modes one and two are suitable for walking scenarios, while modes three and four are suitable for riding or driving scenarios; the recognition and matching process takes the user's current position coordinates as the reference.
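The voice mode (mode three), anchored to the user's current coordinates, can be sketched as below. This is an illustration under stated assumptions: real speech recognition is replaced by plain text input, the point records and search radius are invented, and the distance uses a flat-earth approximation that is adequate at street scale.

```python
import math

def nearby(points, pos, radius_m=150.0):
    """Filter candidate points to those within radius_m of the user's position
    (pos = (lat, lon)), using a flat-earth metric (meters per degree)."""
    def dist(p):
        dx = (p["lon"] - pos[1]) * 111320 * math.cos(math.radians(pos[0]))
        dy = (p["lat"] - pos[0]) * 110540
        return math.hypot(dx, dy)
    return [p for p in points if dist(p) <= radius_m]

def match_voice_description(text, points, pos):
    """Match a recognized utterance against the names of navigation points near
    the user's current coordinates, as the text describes for mode three."""
    utterance = text.lower()
    return [p for p in nearby(points, pos) if p["name"].lower() in utterance]
```

Restricting the match to nearby candidates is what makes the short spoken descriptions in the examples above ("XX restaurant is on the right") sufficient to identify a point.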
Third, if the reference point matches a navigation point in the basic database of live-action navigation points, the user is informed that the reference point is valid; the system analyzes whether the data collected by the user contains new content and, if so, stores it in the pending database; if necessary, the live-action image in the basic database can be retrieved for the user to confirm again;
if the reference point fails to match any navigation point in the basic database, the user is informed that the reference point is in question, and the data the user collected is stored in the pending database;
since the basic database cannot be made complete when it is created, some reference points will be missing, and stored navigation points may also suffer from incomplete live-action data or scene changes; therefore, once a certain amount of data has accumulated in the pending database, its content can be periodically analyzed and sorted, and the results used to update and improve the basic database, achieving low-cost updating;
then, the system judges whether to continue navigation; if so, the user searches for the next reference point along the way; if not, navigation ends.
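The periodic analysis of the pending database can be sketched as a batch job that groups pending records by coordinate cell and promotes cells reported by enough distinct users. The grid size, the user threshold, and the record fields are illustrative assumptions, not values from the patent.

```python
from collections import defaultdict

def consolidate_pending(pending_records, min_users=3, grid=1e-4):
    """Group pending records into coordinate cells (grid degrees wide, roughly
    ten meters at this scale) and return the cells whose point was reported by
    at least min_users distinct users, i.e. the ones to sort into the base DB."""
    cells = defaultdict(set)
    for r in pending_records:
        key = (round(r["lat"] / grid), round(r["lon"] / grid))
        cells[key].add(r["user"])
    return [key for key, users in cells.items() if len(users) >= min_users]
```

Grouping by cell before counting lets slightly different coordinates reported by different users for the same physical reference point still corroborate one another.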
As shown in fig. 2, the interactive live-action navigation method in the intelligent navigation mode specifically includes:
First, intelligent navigation is started and the navigation route is determined from the electronic map. During travel, the system selectively displays images of navigation points along the way according to the results of the user's personalized-rule analysis; several navigation points can be selected and provided at a time, making them easy for the user to find. This matters especially in large indoor scenes, where many navigation points exist at many coordinate positions: prompting all of them would be cluttered and unfocused, whereas information prompted after screening is more effective and easier to search;
Second, the user searches for the corresponding target along the way according to the image of the navigation point;
if the navigation point is found, the user confirms it as valid through the human-machine interface, for example by voice or by tapping the image of the navigation point in the interface; the system then updates the user's position coordinates, and the reliability priority of the navigation point can be raised accordingly;
if it is not found, the user marks the navigation point as invalid through the interface and can request the system to send a new navigation point; the system records this, lowers the reliability priority of the navigation point, or re-verifies the validity of this key navigation point;
then, the system judges whether to continue navigation; if so, the navigation system continues to display images of navigation points along the way; if not, navigation ends.
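The feedback step above can be sketched as a small update rule: a confirmation raises a navigation point's reliability priority, while a reported miss lowers it and can flag the point for re-verification. The score range, step size, and review threshold are assumptions made for the example.

```python
def apply_feedback(point, confirmed, flag_below=0.3):
    """Update a navigation point's reliability score in [0, 1] from one user
    interaction; flag the point for re-verification when it drops too low."""
    step = 0.05
    if confirmed:
        point["reliability"] = min(1.0, point["reliability"] + step)
    else:
        point["reliability"] = max(0.0, point["reliability"] - step)
        if point["reliability"] < flag_below:
            point["needs_review"] = True  # queue for validity re-verification
    return point
```

Coupled with the ranking by user category, this score would let repeatedly confirmed points be pushed first and stale points fade out of prompts, which is the virtuous interaction the disclosure describes.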
The invention is not limited to the above embodiments; those skilled in the art can make equivalent modifications or substitutions without departing from the spirit of the invention, and such equivalent modifications or substitutions are included within the scope defined by the claims of the present application.