Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Visual odometry is a core problem in many current scenarios that apply computer vision technology. It has important applications in many fields, such as AR/VR, autonomous driving, and indoor navigation.
On a smartphone platform, the system estimates and maintains the orientation of the phone based on an accelerometer and a gyroscope: the current orientation is estimated at the bottom layer through a series of filtering algorithms, and a software sensor, the orientation meter, is constructed for visual odometry algorithms. Estimating the camera orientation again from the raw accelerometer and gyroscope inside a visual odometry algorithm is redundant; especially at an application layer far from the bottom layer (such as the Web end of a non-native application), the computation time is multiplied, so directly reading the orientation meter is the best-performing scheme. However, most smartphones do not use a magnetometer and maintain orientation based only on the accelerometer and gyroscope. This algorithm is unobservable in the yaw-angle dimension, so under continuous motion the estimated orientation exhibits an unchecked, continuous drift in the yaw angle. This constant drift causes a growing error in the yaw-angle part of the odometry, which eventually has an increasingly negative effect on the odometry's tracking accuracy.
Referring to fig. 1, which is a schematic flow chart of a first embodiment of the orientation data compensation method of the present invention, the method includes:
Step S11: acquiring a first orientation data sequence and a shooting time sequence of an image sequence shot within a preset time, and acquiring a second orientation data sequence of the orientation meter within the preset time.
In this embodiment, a mobile terminal is taken as the example for orientation data compensation. The mobile terminal may be a mobile phone, smart glasses, a watch, etc.; a mobile phone is used as the example in the following description. The mobile phone is provided with a camera and shoots continuously within a preset time period, obtaining all captured images in that period; the shooting time of each captured image is recorded, forming a shooting time sequence, and the captured images can be sorted by shooting time to form an image sequence. Furthermore, the mobile phone can obtain corresponding first orientation data from the captured images, where the first orientation data represents the orientation of the camera in the mobile phone at the shooting time of the image and may take the form of a unit quaternion.
The mobile phone is also provided with an orientation meter, a component with a data-processing function integrated in the mobile terminal that can be used to measure the orientation of the camera in the terminal. The orientation meter records second orientation data within each preset time period, and each record is stamped with its statistical time. The second orientation data represents a state quantity describing the orientation of the camera in the mobile terminal at the current moment and may take the form of a unit quaternion.
Specifically, the mobile phone obtains each second orientation data recorded by the orientation meter within the preset time period, together with the statistical time of each, to obtain a second orientation data sequence and a statistical time sequence. For example, the orientation meter in a mobile phone acquires and records orientation data i times within the period 9:00-9:10, obtaining i second orientation data denoted qa0, qa1, qa2, qa3, ..., qai, whose statistical times are denoted ta0, ta1, ta2, ta3, ..., tai; accordingly, sorting the second orientation data in chronological order, the second orientation data sequence recorded by the orientation meter within 9:00-9:10 is qa = (qa0, qa1, qa2, qa3, ..., qai), and the corresponding statistical time sequence is ta = (ta0, ta1, ta2, ta3, ..., tai). It should be noted that the above sequences are ordered chronologically; in practice they could also be ordered in reverse chronological order, or even randomly, but a given second orientation data in the second orientation data sequence must keep its correspondence with its statistical time in the statistical time sequence, e.g., the statistical time of the nth second orientation data in the second orientation data sequence is the nth entry in the statistical time sequence.
Both the first orientation data and the second orientation data are unit quaternions representing the state quantity of the camera orientation in the mobile terminal at the corresponding moment; the yaw angle here refers to a relative change, for example, the change in yaw angle between time tai and time ta0 is denoted yaw(qai - qa0).
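For concreteness, the following sketch (in Python, using numpy) shows how the yaw angle can be extracted from a unit quaternion and how a relative yaw change such as yaw(qai - qa0) can be computed. It assumes Hamilton quaternions in (w, x, y, z) order and a ZYX Euler convention with yaw about the vertical z axis; the embodiment itself does not fix these conventions. Later sketches in this section reuse these helpers.

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion q = (w, x, y, z); equals its inverse."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(p, q):
    """Hamilton product p * q of two quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def yaw_of(q):
    """Yaw (rotation about the vertical z axis) of a unit quaternion, ZYX convention."""
    w, x, y, z = q
    return np.arctan2(2.0 * (w*z + x*y), 1.0 - 2.0 * (y*y + z*z))

def relative_yaw(q_i, q_0):
    """yaw(q_i - q_0) in the patent's notation: yaw of the rotation taking q_0 to q_i."""
    return yaw_of(quat_mul(q_i, quat_conj(q_0)))
```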
Referring to fig. 2, step S11 includes:
Step S111: acquiring an image sequence, and acquiring the shooting time of each image to obtain the shooting time sequence.
In a specific embodiment, the mobile phone shoots continuously within the preset time period using its camera, thereby obtaining the captured images within that period; the shooting time of each captured image is recorded, forming a shooting time sequence, and the images can be sorted by time to obtain the corresponding image sequence. For example, if images are captured at times tv0, tv1, tv2, tv3, ..., tvi within the period 9:00-9:10, the shooting time sequence of the captured images is tv = (tv0, tv1, tv2, tv3, ..., tvi), and the images can be sorted by shooting time to form the image sequence.
Step S112: processing each image by using a neural network algorithm to obtain first orientation data corresponding to each image, and obtaining a first orientation data sequence according to each first orientation data.
Specifically, each captured image may be processed with a neural network algorithm to obtain the first orientation data corresponding to that image, and the first orientation data sequence corresponding to the image sequence is then obtained from the individual first orientation data. Neural network algorithms include, but are not limited to, convolutional neural network algorithms, deep neural network algorithms, and the like. For example, the first orientation data sequence corresponding to the images captured within 9:00-9:10 is qv = (qv0, qv1, qv2, qv3, ..., qvi). It should be noted that the image processing with the neural network algorithm may be executed locally by the mobile terminal; of course, it may also be executed by a cloud server, i.e., after the mobile terminal obtains the images, it sends them to the cloud server, the cloud server processes them with the neural network algorithm to obtain the corresponding first orientation data, and the first orientation data are then returned to the mobile terminal. Through the neural network algorithm, orientation data corresponding to the visual images can be accurately obtained.
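As a rough illustration only, since the embodiment does not fix a particular network architecture: the names orientation_net and load_image below are hypothetical stand-ins for any trained image-to-quaternion network and any image loader. The per-image inference of step S112 might then look like the following (reusing numpy from the sketch above):

```python
def first_orientation_sequence(image_paths, orientation_net, load_image):
    """Step S112 sketch: one unit quaternion of first orientation data per image."""
    qv = []
    for path in image_paths:                      # paths already sorted by shooting time
        raw = orientation_net(load_image(path))   # raw 4-vector predicted by the network
        qv.append(raw / np.linalg.norm(raw))      # re-normalize to a unit quaternion
    return qv
```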
Step S12: and calculating the offset speed of the yaw angle of the orientation meter according to the first orientation data sequence, the second orientation data sequence and the shooting time sequence.
In a specific implementation, the yaw angle of the mobile phone can be estimated from the data collected by the orientation meter, but this estimate has an error: the yaw angle estimated by the orientation meter (hereinafter, "the yaw angle of the orientation meter") continuously deviates, i.e., the error continuously accumulates. In this regard, an offset speed of the yaw angle of the orientation meter can be defined to characterize how fast the orientation meter's error accumulates, where the change in yaw angle between time tai and time ta0 is denoted yaw(qai - qa0). In one embodiment, the offset speed of the yaw angle of the orientation meter can be calculated from the first orientation data sequence, the second orientation data sequence, and the shooting time sequence.
Specifically, referring to fig. 3, step S12 includes:
Step S121: calculating a first difference between each first orientation data in the first orientation data sequence and the first of the first orientation data; calculating a second difference between each second orientation data in the second orientation data sequence and the first of the second orientation data; and calculating a shooting time difference between each shooting time in the shooting time sequence and the first shooting time.
In an embodiment, the shooting times in the shooting time sequence and the statistical times in the statistical time sequence are sorted chronologically, and the first shooting time and the first statistical time may be the same moment (the initial time); that is, the first of the first orientation data and the first of the second orientation data correspond to the same time, i.e., tv0 = ta0, and the orientation at this initial time can be considered free of yaw-angle drift.
Specifically, the first difference between each first orientation data in the first orientation data sequence qv and the first of them is calculated, e.g., Δv0 = qv0 - qv0, Δv1 = qv1 - qv0, Δv2 = qv2 - qv0, Δv3 = qv3 - qv0, ..., Δvi = qvi - qv0. The second difference between each second orientation data in the second orientation data sequence and the first of them is calculated, e.g., Δa0 = qa0 - qa0, Δa1 = qa1 - qa0, Δa2 = qa2 - qa0, Δa3 = qa3 - qa0, ..., Δai = qai - qa0. The shooting time difference between each shooting time in the shooting time sequence and the first shooting time is calculated, e.g., Δtv0 = tv0 - tv0, Δtv1 = tv1 - tv0, Δtv2 = tv2 - tv0, ..., Δtvi = tvi - tv0.
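Under the same conventions as the earlier sketch, the three difference sequences of step S121 can be computed as follows. The quaternion "differences" Δvi and Δai are realized directly as relative yaw changes, since that is how step S122 consumes them.

```python
def difference_sequences(qv, qa, tv):
    """Step S121 sketch: Δvi = yaw(qvi - qv0), Δai = yaw(qai - qa0), Δtvi = tvi - tv0."""
    dv = [relative_yaw(q, qv[0]) for q in qv]  # first differences, from the images
    da = [relative_yaw(q, qa[0]) for q in qa]  # second differences, from the orientation meter
    dt = [t - tv[0] for t in tv]               # shooting time differences
    return dv, da, dt
```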
Step S122: and calculating the offset speed of the yaw angle of the orientation meter by using the first difference, the second difference and the shooting time difference corresponding to the shooting time.
Once the first differences, the second differences, and the shooting time differences are obtained, the offset speed of the yaw angle of the orientation meter can be calculated from them. The first differences are obtained from the captured images and can therefore be regarded as global information from visual positioning, which is used to determine the drift pattern of the orientation meter's yaw angle.
Analysis shows that the drift of the orientation meter's yaw angle fluctuates within a short time window but is relatively stable over a long time window, with the drift angle in a linear relationship with time. The orientation meter records the second orientation data sequence at a high frequency, but in the actual tracking process of the visual odometry each frame of image corresponds to one first orientation data, and a certain interval elapses between shots, so the first orientation data sequence is obtained at a lower frequency. Therefore, the shooting times can be taken as the reference: for each shooting time, the corresponding first difference, second difference, and shooting time difference are obtained, and the offset speed of the orientation meter's yaw angle corresponding to each shooting time is calculated.
Specifically, any shooting time in the shooting time sequence may be taken as a target shooting time. For the target shooting time there is a corresponding captured image and first orientation data, and from that first orientation data the corresponding first difference, called the target first difference, can be determined. Next, the target statistical time closest to the target shooting time is determined in the statistical time sequence; for example, if the target shooting time is 9:01:10 and the two statistical times around it in the statistical time sequence are 9:01:06 and 9:01:11, then 9:01:11 is the target statistical time. After the target statistical time is determined, the corresponding second orientation data can be determined from it, and the corresponding second difference, called the target second difference, is determined from that second orientation data. Then the offset speed of the orientation meter's yaw angle corresponding to the target shooting time can be determined from the target first difference, the target second difference, and the shooting time difference corresponding to the target shooting time (i.e., the difference between the target shooting time and the first shooting time): vdi = (Δvi - Δai)/Δtvi = (yaw(qvi - qv0) - yaw(qai - qa0))/(tvi - tv0).
For each shooting time in the shooting time sequence, the above steps can be executed to obtain the offset speed of the yaw angle corresponding to that shooting time, yielding an offset speed sequence. Then the average of the offset speeds of the orientation meter's yaw angle over all shooting times can be calculated, giving the offset speed of the orientation meter's yaw angle over the whole preset time period; that is, the mean of all offset speeds in the offset speed sequence is computed, which captures the drift pattern of the orientation meter. Of course, in practice statistical indicators such as the median or mode of the offset speed sequence may also be used as the offset speed of the orientation meter's yaw angle.
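Putting steps S121 and S122 together, a minimal sketch of the offset speed calculation, including the nearest-statistical-time matching and the final averaging, could read as follows (assuming tv[0] == ta[0], i.e., no accumulated drift at the initial time, as stated above; the helpers come from the earlier sketch):

```python
def offset_speed(qv, tv, qa, ta):
    """Mean offset speed of the orientation meter's yaw angle, in rad/s."""
    ta = np.asarray(ta)
    speeds = []
    for i in range(1, len(tv)):                     # skip i = 0, where Δtv0 = 0
        j = int(np.argmin(np.abs(ta - tv[i])))      # statistical time nearest tvi
        dv = relative_yaw(qv[i], qv[0])             # Δvi, from the images
        da = relative_yaw(qa[j], qa[0])             # Δai, from the orientation meter
        speeds.append((dv - da) / (tv[i] - tv[0]))  # vdi
    return float(np.mean(speeds))                   # median or mode would also work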
Step S13: and obtaining compensation data corresponding to each second orientation data according to the offset speed.
After the offset speed of the yaw angle of the heading meter in the whole preset time period is determined, the compensation data corresponding to each second heading data can be determined according to the offset speed.
Referring to fig. 4, step S13 specifically includes:
step S131: and calculating to obtain the statistical time difference between each statistical time and the first statistical time in the statistical time sequence.
Specifically, the statistical time difference between each statistical time in the statistical time sequence and the first statistical time is calculated, e.g., Δta0 = ta0 - ta0, Δta1 = ta1 - ta0, Δta2 = ta2 - ta0, ..., Δtai = tai - ta0.
Step S132: and calculating to obtain compensation data corresponding to each second orientation data according to the statistical time difference and the offset speed.
Specifically, for each second orientation data in the second orientation data sequence recorded by the orientation meter, the elapsed time is Δtai = tai - ta0, and the compensation data corresponding to that second orientation data is obtained from the offset speed vd calculated in step S12 as X = vd × Δtai; the calculated compensation data is then converted into a corresponding relative rotation amount, e.g., Y = Rot(X).
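A sketch of step S132 under the conventions above: the drift accumulated up to each statistical time is X = vd × Δta, and Rot(X) is realized as a quaternion rotation about the vertical z axis. The sign of the correction is an assumption here: with the offset speed defined as above, vd > 0 means the orientation meter lags the vision estimate, so the compensation rotates forward by the accumulated deficit.

```python
def compensation_quaternions(ta, vd):
    """Step S132 sketch: Y = Rot(X) with X = vd * (tai - ta0), as (w, x, y, z)."""
    comps = []
    for t in ta:
        x = vd * (t - ta[0])   # X: yaw drift accumulated by statistical time tai
        # unit quaternion for a rotation of angle x about the vertical z axis
        comps.append(np.array([np.cos(x / 2.0), 0.0, 0.0, np.sin(x / 2.0)]))
    return comps
```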
Step S14: each second orientation data in the second orientation data sequence is compensated based on the compensation data.
Specifically, the product of each second orientation data and its compensation data is calculated to obtain each compensated second orientation data. The compensated second orientation data is then fed into the visual odometry algorithm for computer-vision positioning, making the visual positioning result more accurate, i.e., improving the tracking accuracy of the visual odometry.
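Step S14 then applies the compensation by quaternion multiplication. Whether the compensation quaternion pre- or post-multiplies the second orientation data depends on whether the yaw correction is expressed in the world frame or the body frame; the sketch below assumes a world-frame correction and therefore pre-multiplies, reusing the helpers above.

```python
def compensate_sequence(qa, ta, vd):
    """Step S14 sketch: compensated second orientation data qai' = Yi * qai."""
    return [quat_mul(y, q)
            for y, q in zip(compensation_quaternions(ta, vd), qa)]
```

A full pipeline under these assumptions would be vd = offset_speed(qv, tv, qa, ta) followed by compensate_sequence(qa, ta, vd), with the result fed to the visual odometry algorithm.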
In the above method, the offset speed of the orientation meter's yaw angle is obtained by calculation; compensation data corresponding to each second orientation data are obtained according to the offset speed; and each second orientation data in the second orientation data sequence is compensated based on the compensation data, so that the yaw angle is compensated, eliminating or reducing the negative influence of yaw-angle drift on tracking accuracy. The global information of visual positioning is used to make up for the continuous drift of the orientation meter's yaw angle, improving the tracking accuracy of the visual odometry algorithm.
Referring to fig. 5, which is a schematic structural diagram of the yaw angle compensation device of the present invention, the device includes: a data acquisition module 51, a speed calculation module 52, a compensation data calculation module 53, and a compensation module 54.
The data acquisition module 51 is configured to acquire a first orientation data sequence and a shooting time sequence of an image sequence shot within a predetermined time, and to acquire a second orientation data sequence of the orientation meter within the predetermined time.
The speed calculation module 52 is configured to calculate the offset speed of the yaw angle of the orientation meter according to the first orientation data sequence, the second orientation data sequence, and the shooting time sequence.
The compensation data calculation module 53 is configured to obtain compensation data corresponding to each second orientation data according to the offset speed.
The compensation module 54 is configured to compensate each second orientation data in the second orientation data sequence based on the compensation data. In an embodiment, the product of each second orientation data and its compensation data is calculated to obtain each compensated second orientation data. The compensated second orientation data may then be input into the visual odometry algorithm, making the visual positioning result more accurate.
Further, the speed calculation module 52 is further configured to calculate a first difference between each first orientation data in the first orientation data sequence and the first of the first orientation data; calculate a second difference between each second orientation data in the second orientation data sequence and the first of the second orientation data; calculate a shooting time difference between each shooting time in the shooting time sequence and the first shooting time; and calculate the offset speed of the yaw angle of the orientation meter by using the first differences, the second differences, and the shooting time differences corresponding to the shooting times.
Further, the speed calculation module 52 is further configured to calculate, by using the first difference, the second difference, and the shooting time difference corresponding to each shooting time, the offset speed of the yaw angle of the orientation meter corresponding to that shooting time; and to calculate the offset speed of the yaw angle of the orientation meter according to the offset speeds corresponding to all shooting times.
Further, the data acquisition module 51 is further configured to obtain a statistical time sequence corresponding to the second orientation data sequence, where the statistical times in the statistical time sequence correspond one-to-one to the second orientation data in the second orientation data sequence;
the speed calculation module 52 is further configured to determine, for any target shooting time, the corresponding target first difference according to the first orientation data corresponding to the target shooting time; determine the target statistical time closest to the target shooting time in the statistical time sequence, and determine the corresponding target second difference according to the second orientation data corresponding to the target statistical time; and calculate the offset speed of the yaw angle of the orientation meter corresponding to the target shooting time according to the target first difference, the target second difference, and the shooting time difference corresponding to the target shooting time.
Further, the first of the first orientation data and the first of the second orientation data correspond to the same time.
Further, the data acquisition module 51 is further configured to obtain a statistical time sequence corresponding to the second orientation data sequence, where the statistical times in the statistical time sequence correspond one-to-one to the second orientation data in the second orientation data sequence;
the compensation data calculation module 53 is further configured to calculate the statistical time difference between each statistical time in the statistical time sequence and the first statistical time, and to calculate the compensation data corresponding to each second orientation data according to the statistical time differences and the offset speed.
Further, the compensation module 54 is further configured to calculate the product of each second orientation data and its compensation data to obtain each compensated second orientation data.
Further, the data acquisition module 51 is further configured to acquire the image sequence and the shooting time of each image to obtain the shooting time sequence, and to process each image with a neural network algorithm to obtain the first orientation data corresponding to each image, obtaining the first orientation data sequence from the individual first orientation data.
For the specific functional implementation of each module of the yaw angle compensation device, reference may be made to the embodiments of the orientation data compensation method, which are not repeated here. The yaw angle compensation device obtains the offset speed of the yaw angle of the orientation meter by calculation; obtains compensation data corresponding to each second orientation data according to the offset speed; and compensates each second orientation data in the second orientation data sequence based on the compensation data, so that the yaw angle is compensated, eliminating or reducing the negative influence of yaw-angle drift on tracking accuracy. The global information of visual positioning is used to make up for the continuous drift of the orientation meter's yaw angle, improving the tracking accuracy of the visual odometry algorithm.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device comprises a memory 202 and a processor 201 connected to each other.
The memory 202 is used for storing program instructions for implementing the orientation data compensation method described above; for details, reference may be made to the embodiments of the orientation data compensation method, which are not repeated here.
The processor 201 is used to execute the program instructions stored in the memory 202.
The processor 201 may also be referred to as a Central Processing Unit (CPU). The processor 201 may be an integrated circuit chip having signal processing capabilities. The processor 201 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 202 may be a memory bank, a TF card, etc., and can store all information in the electronic device, including the input raw data, computer programs, intermediate operation results, and final operation results. It stores and retrieves information at the locations specified by the controller. Only with a memory can the electronic device have a storage function and operate normally. According to use, the storage of an electronic device can be classified into main storage (internal memory) and auxiliary storage (external memory). External storage is usually a magnetic medium, an optical disk, or the like, and can hold information for a long time. Internal memory refers to the storage components on the main board, which hold the data and programs currently being executed; it serves only for temporary storage, and its contents are lost when the power is turned off or interrupted.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is merely a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a system server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application.
Fig. 7 is a schematic structural diagram of a computer-readable storage medium according to the present invention. The storage medium of the present application stores a program file 203 capable of implementing all of the above orientation data compensation methods; the program file 203 may be stored in the storage medium in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage device includes: various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural or process transformations made using the contents of the specification and the drawings, or direct or indirect applications in other related technical fields, are likewise included in the scope of protection of the present invention.