CN113724418B - Data processing method, device and readable storage medium - Google Patents

Data processing method, device and readable storage medium
Info

Publication number
CN113724418B
CN113724418B
Authority
CN
China
Prior art keywords
data
point
variable
gesture
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110989737.0A
Other languages
Chinese (zh)
Other versions
CN113724418A (en)
Inventor
高峻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202110989737.0A
Publication of CN113724418A
Application granted
Publication of CN113724418B
Status: Active
Anticipated expiration

Abstract

The application provides a data processing method, a device, and a readable storage medium, wherein the data processing method comprises the following steps: step S10, in response to acquiring driving gesture data comprising a plurality of gesture data points, cutting the driving gesture data into a plurality of target data segments; step S20, selecting a gesture data point for each target data segment as a reference data point; step S30, calculating a data variable of each gesture data point in the plurality of target data segments according to the reference data points, so as to acquire the compressed driving gesture data. The data processing method, device, and readable storage medium can effectively compress the driving gesture data as required, effectively reduce the data volume, and make the data more convenient to transmit and/or store.

Description

Data processing method, device and readable storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data processing method, apparatus, and readable storage medium.
Background
The driving gesture data generally refers to the real-time position, driving track, and the like of the vehicle. Problems in the running process of the vehicle can be discovered and analyzed according to the driving gesture data, and the driving habits, personal information, and the like of a user can also be acquired. In addition, after a vehicle accident occurs, the causes of the accident can be analyzed according to the operation data.
The driving gesture data is therefore very important, but the data size is strictly limited in some transmission or storage application environments, so a lossless compression method is generally used to compress the data. However, due to the limitation of the compression ratio, the lossless compression method alone cannot solve all of the storage and transmission problems.
Disclosure of Invention
The application provides a data processing method, a data processing device and a readable storage medium, which are used for solving the problem that the data size is strictly limited in driving data application, and improving the convenience of transmission and storage.
In one aspect, the present application provides a data processing method. Specifically, the data processing method includes: step S10, in response to acquiring driving gesture data comprising a plurality of gesture data points, cutting the driving gesture data into a plurality of target data segments; step S20, selecting a gesture data point for each target data segment as a reference data point; step S30, calculating a data variable of each gesture data point in the plurality of target data segments according to the reference data points, so as to acquire the compressed driving gesture data.
Optionally, the step S10 in the data processing method includes: step S11, sampling the driving gesture data according to preset data precision to obtain sampling gesture data; and step S12, acquiring the plurality of cut target data segments according to the sampling gesture data.
Optionally, the step S20 in the data processing method includes: and selecting a first posture data point of each target data segment as a reference data point corresponding to the target data segment.
Optionally, the step S10 in the data processing method includes:
determining the segment number with the minimum total data amount according to the driving gesture data, and cutting the driving gesture data based on the segment number;
the total amount of data is calculated according to the following formula:
M=x[a+(y-1)×b]
wherein M is the total data amount, a is the number of bits of each gesture data point, x is the number of segments of the target data segment, y is the number of gesture data points contained in each target data segment, and b is the number of bits of each data variable.
Optionally, after the step S30 in the data processing method, the method further includes: and according to the reference data points and the data variables, reversely calculating to restore the driving posture data.
Optionally, the gesture data point in the data processing method is selected from at least one of a timestamp, a pose and a three-dimensional feature point; and/or the data variable is selected from at least one of a timestamp variable, a pose variable and a three-dimensional feature point variable.
Optionally, if the pose variable includes a pose quaternion variable and a pose three-dimensional vector variable, the step S30 in the data processing method includes:
according to the quaternion of the attitude data point to be compressed and the quaternion of the reference data point, calculating the attitude quaternion variable of the attitude data point to be compressed;
and calculating pose three-dimensional vector variables of the pose data points to be compressed according to the three-dimensional vector of the pose data points to be compressed, the rotation matrix corresponding to the quaternion of the reference data points and the three-dimensional vector of the reference data points.
Optionally, if the data variable includes a three-dimensional feature point variable, the step S30 in the data processing method includes:
and calculating the three-dimensional characteristic point variable of the gesture data point to be compressed according to the difference between the three-dimensional characteristic point of the gesture data point to be compressed and the gravity centers of the three-dimensional characteristic points of all gesture data points in the target data segment where the gesture data point to be compressed is located.
In another aspect, the present application provides a data processing apparatus, comprising: a processor and a memory storing a computer program, which, when run by the processor, implements the steps of the data processing method as described above.
In another aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a data processing method as described above.
As described above, the data processing method, device, and readable storage medium provided by the application can effectively compress the driving gesture data by cutting the driving gesture data, selecting a reference data point for each cut data segment, and calculating the data variables of the gesture data points in the data segments, thereby effectively reducing the data volume and making the data more convenient to transmit and/or store.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of an embodiment of step S10 shown in the embodiment of FIG. 1;
fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, elements having the same name in different embodiments of the present application may have the same meaning or different meanings, the particular meaning being determined by its interpretation in the specific embodiment or by further combining the context of that embodiment.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one aspect, the present application provides a data processing method, and fig. 1 is a schematic flow chart of the data processing method according to an embodiment of the present application.
Referring to fig. 1, in one embodiment, the data processing method includes:
step S10: in response to acquiring vehicle attitude data comprising a plurality of attitude data points, the vehicle attitude data is cut into a plurality of target data segments.
It will be appreciated that cutting data into smaller units of data segments of a certain length is the basis for data compression. If the driving gesture data can be uniformly cut, compression calculation can be more conveniently carried out.
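The cutting of step S10 can be sketched as a uniform chunking of the gesture data point sequence. This is an illustrative sketch only, not the patent's implementation; the function name and list-based representation are assumptions.

```python
# An illustrative sketch of step S10 (names are assumptions, not from
# the patent): cut a sequence of gesture data points into uniform
# segments of y points each, so each segment can later carry its own
# full-precision reference data point.
def cut_into_segments(points, y):
    if y <= 0:
        raise ValueError("segment length y must be positive")
    return [points[i:i + y] for i in range(0, len(points), y)]
```

A trailing segment shorter than y is kept as-is here; in practice the segment length would be chosen so the data divides evenly, as in the uniform-cutting example later in the text.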
Step S20: a gesture data point is selected for the target data segment as a reference data point.
Optionally, the reference data point is a data point recorded at full precision and used as the basis for the others. For example, the first gesture data point, the last gesture data point, or any gesture data point in the middle of the driving gesture data can be selected as the reference data point, and a plurality of target data segments can also share one reference data point. In this embodiment, for each target data segment, the first gesture data point of that segment may be selected as its corresponding reference data point, or the first gesture data point of the driving gesture data may be selected as the reference data point shared by the target data segments.
Step S30: and calculating a data variable of each attitude data point in the plurality of target data segments according to the reference data points so as to acquire the compressed driving attitude data.
For each target data segment, if the first gesture data point is selected as the reference data point, the data variables of the other gesture data points are data increments; if the last gesture data point is selected as the reference data point, the data variables of the other gesture data points are data decrements; if a gesture data point between the first and the last is selected as the reference data point, both data increments and data decrements exist among the data variables of the other gesture data points. After calculating the data variable of each gesture data point in the plurality of target data segments, the compressed driving gesture data can be obtained based on the reference data points and the data variables of the gesture data points in the target data segments.
In summary, the data processing method provided in this embodiment can effectively compress the driving gesture data through operations such as cutting the driving gesture data, selecting a reference data point for each cut data segment, and calculating the data variables of the gesture data points in the data segment, thereby effectively reducing the data volume and making the data more convenient to transmit and/or store.
In an embodiment, the gesture data points in the data processing method are selected from at least one of time stamps, gestures, three-dimensional feature points. In another embodiment, the data variable is selected from at least one of a time stamp variable, a pose variable, a three-dimensional feature point variable.
In the present embodiment, the pose includes a position and an attitude. In three-dimensional space, there are 3 degrees of freedom for rotation and 3 degrees of freedom for position. Since rotation is relatively complex, a redundant expression exceeding the degrees of freedom is generally used: the literature expresses rotation with 4 normalized parameters, namely a quaternion, i.e., 4 parameters are used to express the 3 degrees of freedom of rotation. Thus, the pose variable is expressed with 7 parameters for 6 degrees of freedom: a quaternion plus a three-dimensional vector.
Fig. 2 is a specific flowchart of an embodiment of step S10 shown in the embodiment of fig. 1.
Referring to fig. 2, in an embodiment, step S10 in the data processing method includes:
step S11: and sampling the driving gesture data according to the preset data precision to obtain sampling gesture data.
Alternatively, different data precisions may be preset for the timestamp, attitude, position, and three-dimensional feature points respectively. For example, the timestamp may be set to a precision of 0.1 ms, the attitude in the pose may be set to a rotation precision of 0.01 degrees, the position may be set to centimeter-level precision, and the three-dimensional feature points may be set to centimeter-level precision.
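The precision-based sampling of step S11 amounts to quantizing each value to a fixed tick size. A minimal sketch, assuming scalar values and the precisions mentioned above; the function names are illustrative.

```python
# An illustrative sketch of step S11 (names are assumptions): quantize
# each raw value to a preset precision, e.g. 0.1 ms for timestamps or
# 0.01 degrees for rotations, so fewer bits are needed per value.
def quantize(value, precision):
    # store the value as an integer count of `precision` ticks
    return round(value / precision)

def dequantize(ticks, precision):
    return ticks * precision
```

Each quantized value can then be stored as an integer whose bit width depends only on the precision and the value range, which is what makes the later segment-length optimization meaningful.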
It should be noted that if the pose is to be used for coordinate-system transformation (e.g., transforming a point from one coordinate system to another), the transformed point must still meet the centimeter-level accuracy requirement. Therefore, if the pose is used for coordinate transformation, the precision of the pose needs to be set in combination with the precision of the three-dimensional feature points.
Step S12: and acquiring a plurality of cut target data segments according to the sampling gesture data.
Optionally, the acquired sampling gesture data is cut into smaller data segments of a certain length, and the optimal cutting length is obtained using a solution method for a planning problem, so that the number of bits used can be minimized while meeting the precision requirement.
In an embodiment, step S20 in the data processing method includes:
and selecting a first posture data point of each target data segment as a reference data point corresponding to the target data segment.
Alternatively, if the first pose data point is taken as the reference data point, only data increments need to be calculated. In this case, if one reference data point is selected for each target data segment, the reference data point corresponding to each target data segment is the first pose data point of that segment. If an intermediate pose data point is selected as the reference data point, both data increments and data decrements need to be calculated. If the last pose data point is selected as the reference data point, only data decrements need to be calculated. It will be appreciated that the further a gesture data point is from the reference data point, the more bits are required to represent its data variable.
In an embodiment, step S10 in the data processing method includes:
and determining the segment number with the minimum total data amount according to the driving gesture data, so as to cut the driving gesture data based on the segment number. In other embodiments, the number of segments may be determined by selecting the total amount of data that meets the usage requirements, as desired. In this embodiment, each of the target data segments includes a reference data point.
The total amount of data can be calculated according to the following formula:
M=x[a+(y-1)×b]
wherein M is the total data amount, a is the bit number of each gesture data point, x is the segment number of the target data segment, y is the gesture data point number contained in each target data segment, and b is the bit number of each data variable.
For example, assume that 1000 gesture data points are generated after the vehicle continuously travels for 1 km, and that each gesture data point occupies a = 512 bits, so the raw driving gesture data amounts to 512×10³ bits. Since uniform cutting makes the compression calculation more convenient, the 1000 gesture data points within the 1 km are cut uniformly with 100 m as the cutting precision, i.e., y = 100 gesture data points per segment, so that every 100 m there is one full-precision reference expression. This yields x = 10 data segments, and the 10 reference data points together require 5120 bits. For the data variable part, only the information of each gesture data point relative to the reference data point needs to be considered to compute the increment: the pose is finally expressed with 117 bits, the timestamp with 21 bits, and the three-dimensional feature point with 39 bits, i.e., b = 177 bits express the compressed information of one gesture data point within the 100 m representation range under the respective precision requirements. Therefore, the entire 1 km of driving gesture data is expressed with M = 10×[512+(100−1)×177] bits, about 180×10³ bits, thereby achieving data compression to about 35% of the original 512×10³ bits. In a further embodiment, due to applied engineering constraints, the selected cut segment length is 72 meters.
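Under the stated assumptions of the worked example (a = 512 bits per full-precision point, b = 177 bits per data variable, 1000 points cut into 10 segments of 100), the total-data-amount formula can be checked numerically; the names below are illustrative.

```python
# Checking the worked example numerically, under its stated assumptions:
# a = 512 bits per full-precision point, b = 177 bits per data variable
# (117 pose + 21 timestamp + 39 feature point), 1000 points cut into
# x = 10 segments of y = 100 points each.
def total_bits(x, y, a, b):
    # M = x * [a + (y - 1) * b]: each segment stores one full-precision
    # reference point plus y - 1 compressed data variables.
    return x * (a + (y - 1) * b)

a_bits, b_bits = 512, 177
n_points = 1000
x_segments = 10
y_points = n_points // x_segments

m = total_bits(x_segments, y_points, a_bits, b_bits)   # 180350 bits
ratio = m / (n_points * a_bits)                        # ~0.35
```

The result, 180350 bits against 512000 raw bits, matches the roughly 35% figure quoted in the example.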
It will be appreciated that the compressed data suffers a loss in accuracy, whereas in the engineering development of algorithm prototypes the raw real data may be required, or the raw data may need to be referenced when studying the impact of compression on application results. For example, in map development work, the information of high-frequency pose data points may be required; if a map review is performed with only the compressed data, only a low data density is available. The required total data amount therefore depends on the particular application, which in turn determines the number of segments to be cut.
In an embodiment, after performing step S30, the data processing method further includes:
and according to the reference data points and the data variables, reversely calculating to restore the driving posture data.
In this embodiment, the compressed data is restored to the pre-compression pose and timestamp data by performing the inverse operation on the data variables and the reference data points.
In an embodiment, step S30 in the data processing method includes:
the timestamp variable is calculated according to the following formula:
δt = tk − t0
wherein δt is the timestamp variable, tk is the timestamp to be compressed, and t0 is the timestamp of the reference data point.
In another embodiment, upon decompression, the time stamp is calculated according to the following formula:
tk = δt + t0
the required data volume can be effectively compressed by expressing the timestamp information of the gesture data point by the difference value of the gesture data point relative to the reference data point. For example, when the time stamp precision requirement is set to 0.1ms, the time stamp information in the original real data is expressed by 64 bits, and the compressed time stamp variable only needs to be expressed by 21 bits.
In an embodiment, if the pose variable includes a pose quaternion variable and a pose three-dimensional vector variable, step S30 in the data processing method includes:
according to the quaternion of the attitude data point to be compressed and the quaternion of the reference data point, calculating the attitude quaternion variable of the attitude data point to be compressed;
and calculating pose three-dimensional vector variables of the pose data points to be compressed according to the three-dimensional vector of the pose data points to be compressed, the rotation matrix corresponding to the quaternion of the reference data points and the three-dimensional vector of the reference data points.
Specifically, the pose variable is calculated according to the following formulas:
δq = q0⁻¹ ⊗ qk
δp = C(q0)⁻¹ (pk − p0)
In another embodiment, upon decompression, the pose is calculated according to the following formulas:
qk = q0 ⊗ δq
pk = C(q0) δp + p0
wherein Tk = (qk, pk), T0 = (q0, p0), Tk is the pose of the pose data point to be compressed, and T0 is the pose of the reference data point. δq is the pose quaternion variable, and δp is the pose three-dimensional vector variable. q0 is the quaternion of the reference data point, p0 is the three-dimensional vector of the reference data point, qk is the quaternion of the pose data point to be compressed, and pk is the three-dimensional vector of the pose data point to be compressed. C(q0) is the rotation matrix corresponding to the quaternion of the reference data point.
With continued reference to the above embodiment, the pose information of one pose data point in the original real data is calculated by the above formulas and can be expressed with a 117-bit pose variable after compression.
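The relative-pose computation described above can be sketched with elementary quaternion algebra. This is an illustrative reading of the method, assuming (w, x, y, z) quaternion order and Hamilton products; all names are assumptions, not from the patent.

```python
import numpy as np

# Illustrative sketch: delta_q is the rotation from the reference pose
# to the pose being compressed, and delta_p is the translation
# difference expressed in the reference frame. Quaternions are
# (w, x, y, z) and assumed unit-norm.

def q_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    # conjugate equals inverse for a unit quaternion
    return np.array([q[0], -q[1], -q[2], -q[3]])

def q_to_rot(q):
    # rotation matrix C(q) for a unit quaternion
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def compress_pose(q_k, p_k, q0, p0):
    delta_q = q_mul(q_conj(q0), q_k)          # relative rotation
    delta_p = q_to_rot(q0).T @ (p_k - p0)     # translation in reference frame
    return delta_q, delta_p

def decompress_pose(delta_q, delta_p, q0, p0):
    q_k = q_mul(q0, delta_q)
    p_k = q_to_rot(q0) @ delta_p + p0
    return q_k, p_k
```

Because the compression is an exact change of reference frame, decompression recovers the original pose up to floating-point precision; the bit savings come from quantizing the small-range deltas, which this sketch omits.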
In an embodiment, if the data variable includes a three-dimensional feature point variable, step S30 in the data processing method includes:
and calculating the three-dimensional characteristic point variable of the gesture data point to be compressed according to the difference between the three-dimensional characteristic point of the gesture data point to be compressed and the gravity centers of the three-dimensional characteristic points of all gesture data points in the target data segment where the gesture data point to be compressed is located.
Specifically, the three-dimensional feature point variable is calculated according to the following formula:
δpf = T (pf − p̄f)
In another embodiment, upon decompression, the three-dimensional feature point is calculated according to the following formula:
pf = T⁻¹ δpf + p̄f
wherein δpf is the three-dimensional feature point variable, T is the orthogonal projection calculated by orthogonal transformation, pf is the three-dimensional feature point of the gesture data point to be compressed, and p̄f is the center of gravity of all three-dimensional feature points in the target data segment.
It will be appreciated that the range of heights within one data segment will not be too large; for example, in a 100 m data segment the heights may span only 20 m, so the heights can be expressed with fewer bits. Here, all three-dimensional feature point information in the original real data is acquired first, then its center of gravity is calculated as a reference, and the three-dimensional feature point variables can then be calculated by orthogonally transforming the other three-dimensional feature points relative to the center of gravity. With continued reference to the above embodiment, through the above formula, the three-dimensional feature point information of one gesture data point in the original real data can be expressed with a 39-bit three-dimensional feature point variable after compression.
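The centroid step can be sketched as follows; this is an illustrative reading in which the orthogonal transformation T is taken as the identity, so only the centroid-offset part of the method is shown, and all names are assumptions.

```python
import numpy as np

# Illustrative sketch: each three-dimensional feature point is stored
# as its offset from the centre of gravity of all feature points in the
# segment, so only the small-range offsets need to be encoded.
# The orthogonal transformation T of the text is taken as identity here.
def compress_features(points):
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    return centroid, points - centroid

def decompress_features(centroid, deltas):
    return centroid + deltas
```

The offsets sum to zero by construction, and reconstruction is exact; the bit savings come from the offsets occupying a much smaller numeric range than the absolute coordinates.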
In this embodiment, the actual track characteristics and the distribution of the corresponding reconstruction elements are considered; for example, the height of the track within 100 m does not change greatly, and the lateral movement and the three-dimensional feature points do not spread too far. Therefore, a new projection basis can be calculated using the PCA (principal component analysis) method, and the required data amount can then be dynamically calculated in the three directions using different representation ranges.
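The PCA idea here, projecting the centred feature points onto their principal axes and measuring the value range along each axis so each direction can be allotted its own bit width, can be sketched with an SVD. The function name and range measure are assumptions.

```python
import numpy as np

# Illustrative sketch: find the principal axes of the centred feature
# points via an SVD, then measure the value range along each axis so
# that each direction can be allotted its own bit width.
def pca_axis_ranges(points):
    points = np.asarray(points, dtype=float)
    centred = points - points.mean(axis=0)
    # rows of vt are the principal directions, strongest first
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    projected = centred @ vt.T
    spans = projected.max(axis=0) - projected.min(axis=0)
    return vt, spans
```

For a track that runs mostly in one direction with little height change, the first span dominates and the others stay small, which is exactly the situation where per-axis bit widths pay off.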
Based on the same inventive concept as the previous embodiments, an embodiment of the present invention provides a data processing apparatus, as shown in fig. 3, including: a processor 310 and a memory 311 in which a computer program is stored. The processor 310 illustrated in fig. 3 does not indicate the number of processors but merely the positional relationship of the processor 310 with respect to other devices; in practical applications, the number of processors 310 may be one or more. Likewise, the memory 311 illustrated in fig. 3 only indicates the positional relationship of the memory 311 with respect to other devices; in practical applications, the number of memories 311 may be one or more. The data processing method described above is implemented when the processor 310 runs the computer program.
The data processing apparatus may further include at least one network interface 312. The various components in the data processing apparatus are coupled together by a bus system 313. It is appreciated that the bus system 313 is used to enable connected communication between these components. In addition to the data bus, the bus system 313 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled as bus system 313 in fig. 3.
Thememory 311 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. Wherein the nonvolatile Memory may be Read Only Memory (ROM), programmable Read Only Memory (PROM, programmable Read-Only Memory), erasable programmable Read Only Memory (EPROM, erasable Programmable Read-Only Memory), electrically erasable programmable Read Only Memory (EEPROM, electrically Erasable Programmable Read-Only Memory), magnetic random access Memory (FRAM, ferromagnetic random access Memory), flash Memory (Flash Memory), magnetic surface Memory, optical disk, or compact disk Read Only Memory (CD-ROM, compact Disc Read-Only Memory); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be random access memory (RAM, random Access Memory), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM, static Random Access Memory), synchronous static random access memory (SSRAM, synchronous Static Random Access Memory), dynamic random access memory (DRAM, dynamic Random Access Memory), synchronous dynamic random access memory (SDRAM, synchronous Dynamic Random Access Memory), double data rate synchronous dynamic random access memory (ddr SDRAM, double Data Rate Synchronous Dynamic Random Access Memory), enhanced synchronous dynamic random access memory (ESDRAM, enhanced Synchronous Dynamic Random Access Memory), synchronous link dynamic random access memory (SLDRAM, syncLink Dynamic Random Access Memory), direct memory bus random access memory (DRRAM, direct Rambus Random Access Memory). Thememory 311 described in embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
Thememory 311 in the embodiment of the present invention is used to store various types of data to support the operation of the data processing apparatus. Examples of such data include: any computer program for operating on the data processing apparatus, such as an operating system and application programs; contact data; telephone book data; a message; a picture; video, etc. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. Here, a program for implementing the method of the embodiment of the present invention may be included in an application program.
Based on the same inventive concept as the previous embodiments, the present application further provides a readable storage medium, in particular a readable storage medium having a computer program stored thereon. The readable storage medium may be a memory such as a Ferroelectric Random Access Memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be one of various devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. The computer program stored in the readable storage medium, when executed by a processor, implements the steps of the data processing method in the above embodiment. The specific flow of steps implemented when the computer program is executed by the processor is described with reference to the embodiment shown in fig. 1 and will not be repeated here.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the claims; all equivalent structures or equivalent processes based on the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the scope of the claims of the present application.

Claims (9)

1. A method of data processing, comprising:
step S10, in response to acquiring driving gesture data comprising a plurality of gesture data points, cutting the driving gesture data into a plurality of target data segments;
step S20, selecting a gesture data point from each target data segment as a reference data point;
step S30, calculating a data variable of each gesture data point in the plurality of target data segments according to the reference data points, so as to acquire compressed driving gesture data;
wherein the step S10 includes: determining the number of segments with the minimum total amount of data according to the driving gesture data, and cutting the driving gesture data based on that number of segments; the total amount of data is calculated according to the following formula:

S = a × m + b × m × (n − 1)

where S is the total amount of data, a is the number of bits of each gesture data point, m is the number of segments of the target data segment, n is the number of gesture data points contained in each target data segment, and b is the number of bits of each data variable.
2. The data processing method according to claim 1, wherein the step S10 includes:
step S11, sampling the driving gesture data according to preset data precision to obtain sampling gesture data;
and step S12, acquiring the plurality of cut target data segments according to the sampling gesture data.
3. The data processing method according to claim 1 or 2, wherein the step S20 includes:
and selecting the first gesture data point of each target data segment as the reference data point corresponding to that target data segment.
4. The data processing method according to claim 1, further comprising, after the step S30:
and performing an inverse calculation according to the reference data points and the data variables to restore the driving gesture data.
5. The data processing method of claim 1, wherein each gesture data point comprises at least one of a timestamp, a pose, and a three-dimensional feature point; and the data variable comprises at least one of a timestamp variable, a pose variable, and a three-dimensional feature point variable.
6. The data processing method according to claim 5, wherein if the pose variable includes a pose quaternion variable and a pose three-dimensional vector variable, the step S30 includes:
calculating the pose quaternion variable of the gesture data point to be compressed according to the quaternion of the gesture data point to be compressed and the quaternion of the reference data point;
and calculating the pose three-dimensional vector variable of the gesture data point to be compressed according to the three-dimensional vector of the gesture data point to be compressed, the rotation matrix corresponding to the quaternion of the reference data point, and the three-dimensional vector of the reference data point.
7. The data processing method according to claim 5, wherein, if the data variable includes a three-dimensional feature point variable, the step S30 includes:
calculating the three-dimensional feature point variable of the gesture data point to be compressed according to the difference between the three-dimensional feature point of the gesture data point to be compressed and the centroid of the three-dimensional feature points of all gesture data points in the target data segment in which it is located.
8. A data processing apparatus, comprising: a processor and a memory storing a computer program which, when executed by the processor, implements the steps of the data processing method of any one of claims 1 to 7.
9. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method according to any of claims 1 to 7.
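For illustration only (not part of the claims), the segment-count selection of claim 1 can be sketched in Python. The function names and the assumption of equal-length segments via ceiling division are ours; the per-variable bit width `var_bits` is treated as a constant here, whereas in practice it would typically depend on the dynamic range within a segment, which is what makes the search over the segment count non-trivial.

```python
def total_bits(num_points, m, point_bits, var_bits):
    """Total compressed size for m segments (formula of claim 1):
    each segment stores one reference point at full width (point_bits)
    plus (n - 1) per-point data variables of var_bits each.
    Assumes equal-length segments of n = ceil(num_points / m) points."""
    n = -(-num_points // m)  # ceiling division: points per segment
    return point_bits * m + var_bits * m * (n - 1)

def best_segment_count(num_points, point_bits, var_bits):
    """Brute-force search for the segment count m minimising the total size."""
    return min(range(1, num_points + 1),
               key=lambda m: total_bits(num_points, m, point_bits, var_bits))
```

With constant bit widths the optimum degenerates toward few segments, since each extra segment trades a cheap variable for a full-width reference point; a variable-width encoding of the deltas would shift the balance.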
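Claim 7's three-dimensional feature point variable can likewise be sketched (illustrative only): each feature point is stored as its offset from the centroid of all feature points in the segment, and restoration adds the centroid back.

```python
import numpy as np

def feature_variables(points):
    """points: (N, 3) array of three-dimensional feature points in one
    target data segment. Returns the per-point offsets from the segment
    centroid, plus the centroid needed for restoration."""
    centroid = points.mean(axis=0)
    return points - centroid, centroid

def restore_features(variables, centroid):
    """Inverse calculation: add the centroid back to recover the points."""
    return variables + centroid
```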
CN202110989737.0A — Priority Date: 2021-08-26 — Filing Date: 2021-08-26 — Data processing method, device and readable storage medium — Status: Active — Granted publication: CN113724418B (en)

Priority Applications (1)

Application Number: CN202110989737.0A — Priority Date: 2021-08-26 — Filing Date: 2021-08-26 — Title: Data processing method, device and readable storage medium

Publications (2)

CN113724418A (en) — published 2021-11-30
CN113724418B (en) — published 2023-07-04

Family

ID=78678197

Family Applications (1)

Application Number: CN202110989737.0A — Status: Active — Publication: CN113724418B (en) — Priority/Filing Date: 2021-08-26 — Title: Data processing method, device and readable storage medium

Country Status (1)

CN — CN113724418B (en)

Citations (4)

* Cited by examiner, † Cited by third party

US5371499A * — priority 1992-02-28, published 1994-12-06 — Intersecting Concepts, Inc. — Data compression using hashing
WO2020164284A1 * — priority 2019-02-12, published 2020-08-20 — 平安科技(深圳)有限公司 — Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN112815939A * — priority 2021-01-04, published 2021-05-18 — 清华大学深圳国际研究生院 — Pose estimation method for mobile robot and computer-readable storage medium
CN112888024A * — priority 2019-11-29, published 2021-06-01 — 腾讯科技(深圳)有限公司 — Data processing method, data processing device, storage medium and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party

US4453778A * — priority 1980-01-10, published 1984-06-12 — Ford Motor Company — Proportioning hydraulic brake mechanism
CN102003298A * — priority 2010-11-26, published 2011-04-06 — 天津大学 — Real-time feedback device and method of combustion information for controlling engine
AT511286B1 * — priority 2012-06-08, published 2018-07-15 — AVL List GmbH — Method for processing calibration data
CN106656201B * — priority 2016-12-23, published 2020-06-30 — 燕山大学 — A compression method based on amplitude-frequency characteristics of sampled data
CN111857550B * — priority 2019-04-29, published 2024-03-22 — 伊姆西IP控股有限责任公司 — Method, apparatus and computer readable medium for data deduplication
CN110826566B * — priority 2019-11-01, published 2022-03-01 — 北京环境特性研究所 — Target slice extraction method based on deep learning
CN112350734B * — priority 2020-11-27, published 2024-07-16 — 湖北工业大学 — Incremental coding-based battery state data compression reconstruction method for electric automobile


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
TR01 — Transfer of patent right
    Effective date of registration: 2024-03-05
    Address after: 510000 No. 8 Songgang Street, Cencun, Tianhe District, Guangzhou City, Guangdong Province
    Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.
    Country or region after: China
    Address before: Room 46, Room 406, No. 1 Yichuang Street, Zhongxin Knowledge City, Huangpu District, Guangzhou City, Guangdong Province
    Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.
    Country or region before: China
