CN113029190B - Motion tracking system and method - Google Patents

Motion tracking system and method

Info

Publication number
CN113029190B
CN113029190B (application CN201911249056.XA)
Authority
CN
China
Prior art keywords
motion
position information
sensing data
information
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911249056.XA
Other languages
Chinese (zh)
Other versions
CN113029190A (en)
Inventor
黄靖甯
谢毅刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Future City Co ltd
Original Assignee
Future City Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Future City Co ltd
Priority to CN201911249056.XA
Publication of CN113029190A
Application granted
Publication of CN113029190B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention provides a motion tracking system and a motion tracking method. The motion tracking system includes three motion sensing devices wearable on body parts of a user. In the method, first sensed data is obtained based on motion sensors disposed on the motion sensing devices, and second sensed data is obtained based on wireless signals transmitted between the motion sensing devices. Motion information of the user is determined according to determining factors, which include the first sensed data and the second sensed data. Accuracy in tracking the user's motion may thereby be improved.

Description

Motion tracking system and method
Technical Field
The present invention relates generally to a method for tracking motion of a user, and in particular to a motion tracking system and a motion tracking method.
Background
In order to provide intuitive operations on electronic devices (such as gaming machines, computers, smartphones, smart appliances, etc.), a user's motion may be detected to directly operate the electronic device according to the user's motion.
In conventional technology, some electronic devices may allow human body parts (such as hands, legs, heads, etc.) of a user to control their operation, and may track the movements of those human body parts. However, these electronic devices provide only one way to detect the movement of multiple body parts simultaneously. For example, a virtual reality (VR) product may provide hand-held controllers, each of which includes an inertial measurement unit (IMU) to track the movement of a user's hand. A single motion tracking approach may be limited by its hardware or tracking mechanism and produce abnormal or inaccurate tracking results.
Disclosure of Invention
Sometimes, tracking results using only a single sensor may be inaccurate. Accordingly, the present invention is directed to a motion tracking system and method.
In one of the exemplary embodiments, the motion tracking method is applicable to a motion tracking system comprising a first motion sensing device, a second motion sensing device, and a third motion sensing device wearable on human body parts of a user. The motion tracking method includes, but is not limited to, the following steps. First sensed data is obtained based on motion sensors disposed on the first, second, and third motion sensing devices. Second sensed data is obtained based on wireless signals transmitted between the first motion sensing device, the second motion sensing device, and the third motion sensing device. Motion information of the user is determined according to a determining factor including the first sensed data and the second sensed data.
In one of the exemplary embodiments, the motion tracking system includes, but is not limited to, three motion sensing devices and a processor. The motion sensing devices are wearable on body parts of the user. Each motion sensing device includes a wireless transceiver and a motion sensor. The wireless transceiver is used to transmit or receive wireless signals. The motion sensor is used to sense the motion of a body part of the user. The processor is configured to obtain first sensed data based on the motion sensors of the motion sensing devices and second sensed data based on the wireless signals transmitted between the three motion sensing devices, and to determine motion information of the user from determining factors comprising the first sensed data and the second sensed data.
Based on the above, the motion tracking system and the motion tracking method according to the embodiments of the present invention can track the motion of each operating body part of a user based on sensing data from both the wireless signals and the motion sensors. Thus, an accurate and reliable tracking mechanism can be provided.
It should be understood, however, that this summary may not contain all aspects and embodiments of the present invention, is not intended to be limiting or restrictive in any way, and that the invention as disclosed herein is to be understood by and will be construed by persons of ordinary skill in the art to encompass obvious improvements and modifications thereto.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram illustrating a motion tracking system according to one of the exemplary embodiments of the present invention;
FIG. 2 is a schematic diagram illustrating a motion tracking system according to one of the exemplary embodiments of the present invention;
FIG. 3 is a flowchart illustrating a method of motion tracking according to one of the exemplary embodiments of the present invention;
FIG. 4 is a schematic diagram illustrating a method of motion tracking according to one of the exemplary embodiments of the present invention;
FIG. 5 is a schematic diagram illustrating a motion tracking method according to one of the exemplary embodiments of the present invention.
Description of the reference numerals
10: motion tracking system;
20: motion tracking system;
100: motion sensing device;
110: wireless transceiver;
130: motion sensor;
200: computing device;
240: memory;
250: processor;
300: head-mounted display;
310: wireless transceiver;
360: image sensor;
B1, B2, B3, B4, B5: human body parts;
S310, S330, S350: steps;
FOV: field of view.
Detailed Description
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
Fig. 1 is a block diagram illustrating a motion tracking system 10 according to one of the exemplary embodiments of the present invention. Referring to fig. 1, the motion tracking system 10 includes, but is not limited to, three or more motion sensing devices 100 and a computing device 200. The motion tracking system 10 may be adapted for use with VR, AR, MR, XR, or other reality-related technologies.
Each motion sensing device 100 includes, but is not limited to, a wireless transceiver 110 and a motion sensor 130. The motion sensing device 100 may be a handheld controller or a wearable device, such as a wearable controller, a smart watch, an ankle sensor, a belt, a head-mounted display (HMD), or the like. In one embodiment, each motion sensing device 100 may be worn on a human body part of a user. The body part may be a hand, head, ankle, leg, waist or other part.
The wireless transceiver 110 may be a communication transceiver compatible with Bluetooth, Wi-Fi, IR, RFID, or other wireless communication technologies. In one embodiment, the wireless transceiver 110 is used to transmit and/or receive wireless signals with the wireless transceivers 110 of the other motion sensing devices 100, and a sequence of second sensed data will be generated based on the wireless signals transmitted between the motion sensing devices 100. A detailed flow for generating the sequence of second sensed data will be described later.
The motion sensor 130 may be an accelerometer, a gyroscope, a magnetometer, an inertial measurement unit (IMU), or any combination of the foregoing. In an embodiment, the motion sensor 130 is configured to sense the motion of the corresponding human body part of the user wearing the motion sensing device 100 over a period of time, and a sequence of first sensed data is generated from the sensing results (such as acceleration, rotation, magnetic force, etc.) of the motion sensor 130 at multiple time points over that period. For example, the first sensed data includes 3-degree-of-freedom (3-DoF) data, and the 3-DoF data relates to the orientation information of the human body part in three-dimensional (3D) space, such as the accelerations in yaw, roll, and pitch.
The computing device 200 includes, but is not limited to, a memory 240 and a processor 250. The computing device 200 may be a computer, a server, a smartphone, a tablet, or one of the motion sensing devices 100.
The memory 240 may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination thereof. The memory 240 may be used to store program codes, device configurations, buffer data, or persistent data (such as sensed data, motion information, distance relationships, etc.), and such data will be described later.
The processor 250 is connected to the memory 240, and the processor 250 is configured to load the program codes stored in the memory 240 to execute the procedures of the exemplary embodiments of the present invention. In one embodiment, the functions of the processor 250 are implemented using programmable units such as a central processing unit (CPU), a microprocessor, a microcontroller, a digital signal processing (DSP) chip, a field-programmable gate array (FPGA), or the like. In some embodiments, the functions of the processor 250 may also be implemented by an independent electronic device or an integrated circuit (IC), and the operations of the processor 250 may also be implemented by software.
It should be noted that the processor 250 may or may not be disposed on the same device as some or all of the motion sensing devices 100. However, the devices separately equipped with the motion sensor 130 and the processor 250 may further include communication transceivers using compatible communication technologies, such as Bluetooth, Wi-Fi, or IR, or physical transmission lines, to transmit/receive data to/from each other.
In one embodiment, the motion tracking system 10 may further include a head-mounted display (HMD) 300. The HMD 300 may be worn on the head of a user. The HMD 300 includes, but is not limited to, a wireless transceiver 310 and an image sensor 360.
The description of the wireless transceiver 310 parallels that of the wireless transceiver 110 and is therefore omitted. Notably, the HMD 300 may communicate with the motion sensing devices 100 through the wireless transceiver 310.
The image sensor 360 may be a camera, such as a monochrome or color camera, a depth camera, a video recorder, or other sensor capable of capturing images.
Fig. 2 is a schematic diagram illustrating a motion tracking system 20 according to one of the exemplary embodiments of the present invention. Referring to fig. 2, the motion tracking system 20 includes an HMD 300 and four motion sensing devices 100: two ankle sensors worn on human body parts B1 and B2 (i.e., the two ankles) and two hand-held controllers for human body parts B3 and B4 (i.e., the two hands). In some embodiments, the HMD 300 may also include another motion sensor 130 (not shown) to obtain orientation information of human body part B5 (i.e., the head). The processor 250 is embedded in the HMD 300.
It should be noted that the motion tracking system 20 is merely an example to illustrate how the motion sensing devices 100, the HMD 300, and the processor 250 may be positioned. However, many other embodiments of the motion tracking system 10 exist, and the invention is not limited thereto.
In order to better understand the operational flow provided in one or more embodiments of the present invention, several embodiments are illustrated below to explain in detail the operational flow of the motion tracking system 10 or the motion tracking system 20. The devices and modules in the motion tracking system 10 or the motion tracking system 20 are applied in the following embodiments to explain the motion tracking method provided herein. Each step of the method may be adjusted according to the actual implementation and should not be limited to what is described herein.
Fig. 3 is a flowchart illustrating a motion tracking method according to one of the exemplary embodiments of the present invention. Referring to fig. 3, the processor 250 may obtain first sensing data based on the motion sensors 130 disposed on the three motion sensing devices 100 (step S310). In particular, depending on the type of motion sensor 130, the acceleration, rotation, magnetic force, orientation, and/or 3-DoF of the motion of the corresponding human body part in 2D/3D space may be obtained, and one or more sensing results of the motion sensors 130 become the sequential first sensing data of that human body part.
On the other hand, the processor 250 may obtain second sensing data based on the wireless signals transmitted between the three motion sensing devices 100 (step S330). In one embodiment, the processor 250 may obtain the signal strengths of wireless signals from three or more motion sensing devices 100 at multiple points in time, and each signal strength is recorded in the memory 240 along with its corresponding transmitter and receiver. The signal strength may be a received signal strength indication (RSSI), a received channel power indicator (RCPI), a reference signal received power (RSRP), or the like. In one embodiment, the motion sensing devices 100 may monitor the signal strengths of all detectable wireless signals, where each wireless signal includes a specific identifier of its transmitter and/or receiver. The motion sensing devices 100 may further feed back the signal strengths with the corresponding identifiers to the computing device 200. In another embodiment, the computing device 200 may monitor the signal strengths of all detectable wireless signals, and the processor 250 records the signal strengths and the corresponding transmitter identifiers in the memory 240. The signal strengths are recorded over a period of time to generate a sequence of second sensed data, which means the second sensed data contains a time-ordered sequence of signal strengths.
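The patent does not fix a mapping from signal strength to distance; a log-distance path-loss model is one common choice. The following is a minimal sketch (Python, illustrative only), assuming two hypothetical calibration constants, TX_POWER_DBM and PATH_LOSS_EXPONENT, neither of which appears in the description.

```python
# Assumed calibration constants, not specified in the patent:
TX_POWER_DBM = -40.0        # expected RSSI one meter from the transmitter
PATH_LOSS_EXPONENT = 2.0    # free-space value; indoor spaces are often 2-4

def rssi_to_distance(rssi_dbm: float) -> float:
    """Estimate transmitter-receiver distance in meters from one RSSI sample
    using the log-distance path-loss model:
    RSSI = TX_POWER_DBM - 10 * n * log10(d)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

# A time-ordered slice of second sensed data for one transmitter/receiver pair.
rssi_samples = [-52.3, -50.8, -48.1]
distances = [rssi_to_distance(r) for r in rssi_samples]
```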
In some embodiments, processor 250 may further obtain third sensed data based on an image captured from image sensor 360. The third sensed data may be the images of the sequence and/or the sensed result of the pixels in the images (such as brightness, color, depth, etc.).
Next, the processor 250 may determine the motion information of the user according to determining factors including the first sensing data and the second sensing data (step S350). In one embodiment, the motion information may include position information and orientation information. First, regarding the position information, in one embodiment the processor 250 may determine the position information of the user according to the first sensed data. In this embodiment, the determining factor includes the first sensing data. The displacement of the corresponding human body part may be estimated by double integration of the detected accelerations of the human body part in three axes (i.e., the first sensed data), and the position information is further determined based on the displacement. For example, the position information may be coordinates in two or three axes, a position relative to a reference, or the like.
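As a minimal sketch of this double-integration step, assuming the accelerations are already gravity-compensated and expressed in a world frame (a step the description leaves implicit):

```python
import numpy as np

def double_integrate(accel: np.ndarray, dt: float,
                     v0: np.ndarray, p0: np.ndarray) -> np.ndarray:
    """Estimate positions by double integration of accelerations.
    accel: (T, 3) world-frame, gravity-compensated accelerations (m/s^2);
    dt: sampling period in seconds; v0, p0: initial velocity and position.
    Returns a (T, 3) array of positions. Sensor bias makes this estimate
    drift quadratically over time, motivating fusion with other data."""
    velocity = v0 + np.cumsum(accel, axis=0) * dt
    position = p0 + np.cumsum(velocity, axis=0) * dt
    return position
```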
In another embodiment, the processor 250 may obtain the position information from the second sensed data based on the wireless signals between the three motion sensing devices 100. In this embodiment, the determining factor includes the second sensing data. It should be noted that the signal strength of a wireless signal is related to the relative distance between the two motion sensing devices 100. Additionally, based on trilateration, the three distances between three points can be used to determine the relative position information of those points. Taking the three motion sensing devices 100 as the aforementioned three points, the processor 250 may determine the relative distance between each two motion sensing devices 100 as a distance relationship between the motion sensing devices 100. The processor 250 may then generate the position information of the tracked device based on the distance relationships and trilateration.
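A minimal trilateration sketch under these assumptions: because the description fixes no coordinate frame, one device is pinned at the origin and a second on the x-axis, so the recovered positions are relative (unique only up to rotation and reflection). The distances could come, for example, from the path-loss sketch above.

```python
import math

def trilaterate(d_ab: float, d_ac: float, d_bc: float):
    """Relative 2D positions of three devices A, B, C from their pairwise
    distances. A is pinned at the origin and B on the x-axis; the position
    of C then follows from the law of cosines."""
    cx = (d_ab ** 2 + d_ac ** 2 - d_bc ** 2) / (2 * d_ab)
    cy = math.sqrt(max(d_ac ** 2 - cx ** 2, 0.0))  # clamp guards noisy input
    return (0.0, 0.0), (d_ab, 0.0), (cx, cy)

# Example: distances among the B3 device, the B4 device, and the HMD.
a, b, c = trilaterate(d_ab=0.6, d_ac=0.9, d_bc=0.8)
```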
Taking the motion tracking system 20 as an example, the processor 250 may obtain the signal strengths of the wireless signals from the motion sensing device 100 for the human body part B3 to the HMD 300 for the human body part B5 (in this embodiment, the HMD 300 acts as one of the motion sensing devices 100), from the motion sensing device 100 for the human body part B4 to the HMD 300 for the human body part B5, and from the motion sensing device 100 for the human body part B3 to the motion sensing device 100 for the human body part B4. The processor 250 may determine the distance relationships from these signal strengths and then generate the position information of the human body part B3 based on the distance relationships. The position information may be coordinates or a relative position.
It should be noted that embodiments are not limited to selecting three motion sensing devices 100. For example, the signal strengths of the wireless signals from the motion sensing apparatus 100 for the human body part B2 to the motion sensing apparatus 100 for the human body part B3, the wireless signals from the motion sensing apparatus 100 for the human body part B3 to the motion sensing apparatus 100 for the human body part B1, and the wireless signals from the motion sensing apparatus 100 for the human body part B2 to the motion sensing apparatus 100 for the human body part B1 may be used to estimate the position information of the human body part B1. The combination of motion sensing devices 100 may vary as desired.
In another embodiment, the processor 250 may determine the location information of the user according to the third sensing data. In this embodiment, the determining factor includes third sensing data. The position and displacement of the body part in the image can be used to determine position information in the real environment. Taking fig. 2 as an example, the sensed intensities and pixel positions corresponding to the human body part B4 in the image may be used to estimate depth information of the human body part B4 (i.e., distance relative to the HMD 300) and to estimate the 2D position of the human body part B4 on a plane parallel to the image sensor 360.
It should be noted that the accuracy of the location information based on only one sensing approach (e.g., based on one of the wireless transceiver 110, the motion sensor 130, and the image sensor 360) may be different. Thus, two or more sensing approaches may be used to determine position information for the corresponding body part.
In one embodiment, the processor 250 may obtain first position information from the first sensed data, obtain second position information from the second sensed data, and obtain adjusted position information from the first position information and the second position information. In this embodiment, the determining factor includes the first sensing data and the second sensing data. The processor 250 may determine the position information based on a combination of the first position information and the second position information. In some embodiments, the combination is a weighted combination, and the adjusted position information is determined according to the weighted first position information and the weighted second position information.
In one embodiment, the weights of the weighted combination of the first position information and the second position information may be fixed. In another embodiment, the weights of the weighted combination of the first position information and the second position information may vary. The weight of the first position information may be a value from 0 to 100%, and the weight of the second position information may be a value from 0 to 100%. However, the weights of the first position information and the second position information cannot both be zero at the same time.
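A sketch of this weighted combination, assuming, as in the numeric examples later in the description, that the two weights sum to one:

```python
import numpy as np

def adjust_position(p1: np.ndarray, p2: np.ndarray,
                    w1: float, w2: float) -> np.ndarray:
    """Adjusted position information = w1 * p1 + w2 * p2, where p1 is the
    first position information (motion sensor based) and p2 the second
    (wireless signal based). Each weight lies in [0, 1]; they must not
    both be zero."""
    assert 0.0 <= w1 <= 1.0 and 0.0 <= w2 <= 1.0 and w1 + w2 > 0.0
    return w1 * p1 + w2 * p2

# Fixed-weight example consistent with the description's later numbers:
p = adjust_position(np.array([6.0, 6.0, 6.0]),
                    np.array([10.0, 10.0, 10.0]), 0.5, 0.5)  # -> [8. 8. 8.]
```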
It should be noted that in some embodiments, the location information determined based on the third sensed data generated by the image of the image sensor 360 may be more accurate than the location information determined based on the wireless transceiver 110 and/or the motion sensor 130. Thus, in one embodiment, the determinants may include the second sensed data and the third sensed data. The processor 250 may determine the location information according to a combination of location information obtained based on the first sensing data, the second sensing data, and the third sensing data.
In one embodiment, the processor 250 may obtain a first portion of the position information from the second sensed data for a first duration, obtain a second portion of the position information from the third sensed data for a second duration, and combine the first portion and the second portion into combined position information. The third sensed data of the detected human body part may be used to correct the position information that is based on the second sensed data across the first duration and the second duration. The processor 250 may determine the combined position information based on the first portion and the second portion of the position information for the different durations. For example, a position (1, 1) is determined based on the second sensed data for the first duration, another position (2, 1) is determined based on the third sensed data for the second duration, and the combined position information may be the displacement from position (1, 1) to position (2, 1).
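Restated as a sketch using the description's own illustrative 2D values:

```python
# First duration: position portion from the second sensed data (wireless).
pos_first_duration = (1, 1)
# Second duration: position portion from the third sensed data (image).
pos_second_duration = (2, 1)
# Combined position information: the displacement across the two durations.
displacement = tuple(b - a for a, b in
                     zip(pos_first_duration, pos_second_duration))  # (1, 0)
```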
In some embodiments, the processor 250 may determine the position information from a weighted combination of the second position information and the third position information. The weights of the second and third position information may vary or be fixed based on the actual situation. For example, the weight of the third position information may be greater than the weight of the second position information. In another embodiment, the position information is the weighted combination if the human body part is present in the third sensed data, and the position information is the second position information if the human body part is not present in the third sensed data.
In one embodiment, the image sensor 360 may be designed with a specific field of view (FOV). If a human body part is positioned outside the field of view of the image sensor 360, the processor 250 cannot determine the motion information of that body part using only the third sensed data, and the first sensed data or the second sensed data should be considered.
In one embodiment, the processor 250 may determine whether a human body part of the user is present in the sequence of third sensing data, and determine, according to the result of that presence determination, whether to use the distance relationships between the three motion sensing devices 100 to determine the position information based on trilateration. The processor 250 may use machine learning techniques, such as deep learning, an artificial neural network (ANN), or a support vector machine (SVM), to identify the target human body part in the third sensed data.
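A sketch of this gating logic; `detector` and its return-None-when-absent convention are assumptions standing in for whatever ANN or SVM model is used, and `trilaterate` refers to the earlier sketch:

```python
def locate_body_part(frame, pairwise_distances, detector):
    """Gate the position source on the presence check. detector(frame) is
    assumed to return a position when the body part appears in the image
    (third sensed data) and None otherwise. trilaterate is the sketch
    given earlier; the tracked device is the third point it returns."""
    position_from_image = detector(frame)
    if position_from_image is not None:
        return position_from_image             # part present in the image
    # Part absent: fall back to the distance relationships and trilateration.
    return trilaterate(*pairwise_distances)[2]
```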
Fig. 4 is a schematic diagram illustrating a motion tracking method according to one of the exemplary embodiments of the present invention. Referring to fig. 4, it is assumed that the motion sensing device 100 of the human body part B4 is a target device. In this figure, the human body part B4 is present in the field of view FOV of the HMD 300 (i.e., the human body part B4 is present in the third sensed data).
Fig. 5 is a schematic diagram illustrating a motion tracking method according to one of the exemplary embodiments of the present invention. Referring to fig. 5, it is assumed that the motion sensing device 100 of the human body part B3 is the target device. In this figure, the human body part B3 is not present in the field of view FOV of the HMD 300 (i.e., the human body part B3 is not present in the third sensed data).
It should be noted that the size and shape of the fields of view shown in fig. 4 and 5 are merely examples, and may be modified based on actual needs.
Accordingly, the field of view of the image sensor 360 is used to determine whether a human body part is present in the third sensed data. In one embodiment, it is assumed that the human body part is located outside the field of view (i.e., not present in the third sensed data) for a first duration and the human body part is located within the field of view of the image sensor 360 (i.e., present in the third sensed data) for a second duration. In some embodiments, it is assumed that the body part is positioned inside the field of view of the image sensor 360 for a first duration and a second duration.
In another embodiment, the processor 250 may obtain the first position information according to the first sensing data, obtain the second position information according to the second sensing data, obtain the third position information according to the third sensing data, and obtain the adjusted position information according to the first position information, the second position information, and the third position information. In this embodiment, the determining factors include the first sensing data, the second sensing data, and the third sensing data. The processor 250 may determine the adjusted position information based on a combination of the first position information, the second position information, and the third position information.
In one embodiment, the combination is a weighted combination. The processor 250 may determine a first weight for the first position information and a second weight for the second position information according to the third position information. In one embodiment, the first weight and the second weight vary iteratively. The third position information is regarded as corrected position information for a duration in which the human body part is present in the third sensing data, and the weighted combination of the first position information and the second position information with the first weight and the second weight is adjusted according to the third position information. It should be noted that the processor 250 may obtain a first parameter by multiplying the first weight with the first position information, obtain a second parameter by multiplying the second weight with the second position information, and obtain the adjusted position information by adding the first parameter to the second parameter to form the weighted combination.
In one embodiment, the first weight and the second weight for the subsequent point in time may be determined based on an equation in which the third position information is equal to the weighted combination of the first position information and the second position information at the previous point in time. For example, at a third point in time in the 3-dimensional coordinate system, the first weight is 0.5 and the second weight is 0.5, the first position information is (6, 6, 6) and the second position information is (10, 10, 10), and the adjusted position information will be (8, 8, 8). If the third position information is (7, 7, 7), the first weight and the second weight at the fourth point in time will be determined to be 0.75 and 0.25, respectively. Next, at the fourth point in time, if the first position information is (7, 6, 6) and the second position information is (12, 10, 10) in the 3-dimensional coordinate system, the adjusted position information will be (8.25, 7, 7).
In another embodiment, the first weight and the second weight at the current point in time may be determined based on an equation in which the third position information is equal to the weighted combination of the first position information and the second position information at the current point in time. For example, at a second point in time in the 3-dimensional coordinate system, the first position information is (6, 6, 6) and the second position information is (10, 10, 10). If the third position information is (7, 7, 7), the first weight and the second weight at the second point in time will be determined to be 0.75 and 0.25, respectively. The adjusted position information at the second point in time will then be determined as (7, 7, 7).
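The weight update implied by these examples solves the equation third = w1 * first + w2 * second together with w1 + w2 = 1. A sketch, solving on a single axis (the description's example is symmetric across axes, so the choice of axis does not matter there):

```python
def solve_weights(p1, p2, p3):
    """Solve p3 = w1 * p1 + w2 * p2 with w1 + w2 = 1 on the first axis.
    Example from the description: p1 = (6, 6, 6), p2 = (10, 10, 10),
    p3 = (7, 7, 7) yields (w1, w2) = (0.75, 0.25)."""
    w1 = (p3[0] - p2[0]) / (p1[0] - p2[0])
    return w1, 1.0 - w1

w1, w2 = solve_weights((6, 6, 6), (10, 10, 10), (7, 7, 7))  # (0.75, 0.25)
```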
In some embodiments, the first weight and the second weight are fixed if the human body part of the user is not present in the third sensed data. If the body part is located outside the field of view of the image sensor 360, the first weight and the second weight remain the same as the first weight and the second weight at the last point in time when the body part of the user was still present in the third sensed data. For example, at a first point in time, the human body part is located within the field of view of the image sensor 360, and the first weight is 0.5 and the second weight is 0.5. Next, at a second point in time, the human body part is located outside the field of view of the image sensor 360. As at the first point in time, the first weight will be 0.5 and the second weight will be 0.5 at the second point in time. The first weight and the second weight vary according to the third sensed data again only once the human body part of the user reappears in the third sensed data.
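Combining the two behaviors (recompute while the body part is in view, freeze while it is not) into one iterative step, again with a hypothetical None-for-absent convention; `solve_weights` is the sketch above:

```python
def step_weights(prev_weights, p1, p2, p3_or_none):
    """One iteration of the weight update: keep the previous weights while
    the body part is absent from the third sensed data (p3_or_none is None),
    and recompute them from the third position information otherwise."""
    if p3_or_none is None:
        return prev_weights
    return solve_weights(p1, p2, p3_or_none)
```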
In another embodiment, the processor 250 may determine the adjusted position information based on a weighted combination of the first position information, the second position information, and the third position information; that is, the adjusted position information is determined according to the sum of the weighted first position information, the weighted second position information, and the weighted third position information. The weights of the three pieces of position information may vary or be fixed depending on the actual situation.
On the other hand, with respect to the orientation information, in one embodiment the processor 250 may use the sequence of first sensed data directly as the orientation information. For example, the orientation information may be the acceleration, the angular velocity in three axes, the orientation, 3-DoF information, and/or 6-DoF information.
In another embodiment, the processor 250 may determine the orientation information from the third sensed data. Taking fig. 4 as an example, two poses of the human body part B4 in the image at different points in time can be used to estimate the orientation information.
In some embodiments, the processor 250 may determine the orientation information from the first sensed data and the third sensed data. The orientation information may be a weighted combination of the first sensed data and the third sensed data. For example, the orientation information is determined from the sum of the weighted first orientation information based on the motion sensor 130 and the weighted second orientation information based on the image sensor 360.
In one embodiment, the field of view of the image sensor 360 is the condition for whether the orientation information is determined using the third sensed data. If the human body part is present in the third sensed data, the orientation information may be determined according to the first sensed data and the third sensed data. If the human body part is not present in the third sensed data, the orientation information may be determined according to the first sensed data only.
In one embodiment, the processor 250 may determine the motion information of the user based on the orientation information and the position information. The orientation information may be generated based on the first sensed data, the third sensed data, or a combination of the first sensed data and the third sensed data, as described above. The position information may be generated based on the first sensing data, the second sensing data, and the third sensing data, as described above. Taking the human body part B1 or B2 in fig. 2 as an example, the motion information may be related to a lifting, pointing, kicking, stepping, or jumping movement.
In another embodiment, the processor 250 may determine the motion information of the user according to the orientation information based on the first sensed data and the adjusted position information based on the first position information and the second position information. Whether or not a human body part is present in the third sensed data, the processor 250 may predict the movement of the user.
In another embodiment, the processor 250 may determine the motion information of the user according to the orientation information based on the first sensed data and the combined position information based on the second sensed data and the third sensed data. This means that the motion information may be determined based on the orientation information and the combined position information across the two durations, covering both presence and absence of the human body part in the third sensed data.
Taking fig. 4 and fig. 5 as an example, a hand-raising movement of the human body part B4 is determined in fig. 4, and a hand-lowering movement is determined in fig. 5. A top-to-bottom swinging motion of the human body part B4 is then determined.
In one embodiment, the processor 250 may determine the motion information of the user based only on the location information based on the second sensed data. In another embodiment, the processor 250 may determine the motion information of the user based only on the combined location information based on the second sensed data and the third sensed data. In some embodiments, if the human body part is not present in the third sensed data, the processor 250 may determine the motion information of the user according to only the position information based on the second sensed data, and if the human body part is present in the third sensed data, the processor 250 may determine the motion information of the user according to only the position information or the combined position information based on the third sensed data.
The displacement or trajectory of the body part may be tracked, and the motion information may be determined based on that displacement or trajectory. Taking fig. 4 and fig. 5 as an example, the human body part B4 moves from top to bottom, and the human body part B4 is therefore determined to swing from top to bottom.
In summary, in the motion tracking system and the motion tracking method for multiple operating parts according to the embodiments of the present invention, the motion of a human body part can be tracked based on signal strengths, the sensing results of the motion sensors, and/or camera images. If the tracked human body part is outside the FOV according to the detection result of the camera images, the signal strengths between the motion sensing devices can compensate the position estimate obtained from the motion sensors. Furthermore, if the tracked body part is present within the FOV, the camera images may be used to correct the signal-strength-based position estimate. Multiple tracking approaches can therefore be applied under different conditions, and the accuracy of the tracking result improves accordingly.
It will be apparent to those skilled in the art that various modifications and variations can be made in the structure of the present invention without departing from the scope or spirit of the invention. In view of the above, it is intended that the present invention cover modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (10)

Translated from Chinese

1. A motion tracking method, applicable to a motion tracking system, wherein the motion tracking system comprises a first motion sensing device, a second motion sensing device, and a third motion sensing device wearable on human body parts of a user, and the motion tracking method comprises:
obtaining first sensing data based on motion sensors disposed on the first motion sensing device, the second motion sensing device, and the third motion sensing device;
obtaining second sensing data based on wireless signals transmitted between the first motion sensing device, the second motion sensing device, and the third motion sensing device;
determining motion information of the user by a determining factor;
obtaining third sensing data based on an image captured by an image sensor;
obtaining orientation information and first position information according to the first sensing data;
obtaining second position information according to the second sensing data;
obtaining third position information according to the third sensing data;
obtaining adjusted position information according to the first position information, the second position information, and the third position information, comprising:
determining a first weight and a second weight according to the third position information;
obtaining a first parameter by multiplying the first weight by the first position information;
obtaining a second parameter by multiplying the second weight by the second position information; and
obtaining the adjusted position information by adding the first parameter to the second parameter; and
determining the motion information of the user according to the orientation information and the adjusted position information,
wherein the determining factor comprises the first sensing data, the second sensing data, and the third sensing data,
wherein the human body part of the user is present in the third sensing data.
2. The motion tracking method according to claim 1, wherein the step of determining the motion information of the user comprises:
obtaining a first portion of position information according to the second sensing data during a first duration;
obtaining a second portion of position information according to the third sensing data during a second duration;
combining the first portion and the second portion of the position information into combined position information; and
determining the motion information of the user according to the orientation information and the combined position information.
3. The motion tracking method according to claim 1, wherein the first weight and the second weight vary iteratively.
4. The motion tracking method according to claim 1, wherein in response to the human body part of the user not being present in the third sensing data, the first weight and the second weight are fixed.
5. The motion tracking method according to claim 2, wherein the human body part of the user is not present in the third sensing data during the first duration, and the human body part of the user is present in the third sensing data during the second duration.
6. A motion tracking system, comprising:
three motion sensing devices, wearable on human body parts of a user, wherein each of the motion sensing devices comprises:
a wireless transceiver, transmitting or receiving wireless signals; and
a motion sensor, sensing a motion of one of the human body parts of the user;
an image sensor, obtaining an image; and
a processor, configured to:
obtain first sensing data based on the motion sensors of the three motion sensing devices;
obtain second sensing data based on wireless signals transmitted between the three motion sensing devices;
determine motion information of the user by a determining factor;
obtain third sensing data based on the image;
obtain orientation information and first position information according to the first sensing data;
obtain second position information according to the second sensing data;
obtain third position information according to the third sensing data;
obtain adjusted position information according to the first position information, the second position information, and the third position information;
determine the motion information of the user according to the orientation information and the adjusted position information;
determine a first weight and a second weight according to the third position information;
obtain a first parameter by multiplying the first weight by the first position information;
obtain a second parameter by multiplying the second weight by the second position information; and
obtain the adjusted position information by adding the first parameter to the second parameter,
wherein the determining factor comprises the first sensing data, the second sensing data, and the third sensing data,
wherein the human body part of the user is present in the third sensing data.
7. The motion tracking system according to claim 6, wherein the processor is configured to:
obtain a first portion of position information according to the second sensing data during a first duration;
obtain a second portion of position information according to the third sensing data during a second duration;
combine the first portion and the second portion of the position information into combined position information; and
determine the motion information of the user according to the orientation information and the combined position information.
8. The motion tracking system according to claim 6, wherein the first weight and the second weight vary iteratively.
9. The motion tracking system according to claim 6, wherein in response to the human body part of the user not being present in the third sensing data, the first weight and the second weight are fixed.
10. The motion tracking system according to claim 7, wherein the human body part of the user is not present in the third sensing data during the first duration, and the human body part of the user is present in the third sensing data during the second duration.
CN201911249056.XA | Priority 2019-12-09 | Filed 2019-12-09 | Motion tracking system and method | Active | CN113029190B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911249056.XA | 2019-12-09 | 2019-12-09 | Motion tracking system and method (CN113029190B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911249056.XA | 2019-12-09 | 2019-12-09 | Motion tracking system and method (CN113029190B)

Publications (2)

Publication Number | Publication Date
CN113029190A | 2021-06-25
CN113029190B | 2024-12-24

Family

ID=76450917

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911249056.XA | Motion tracking system and method (CN113029190B, Active) | 2019-12-09 | 2019-12-09

Country Status (1)

Country | Link
CN (1) | CN113029190B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
TW201837660A (en) * | 2017-04-13 | | National Chengchi University (國立政治大學) | Wearable instant interactive system
CN110275603A (en) * | 2018-03-13 | 2019-09-24 | Facebook Technologies LLC (脸谱科技有限责任公司) | Distributed artificial reality system, bracelet device and head-mounted display

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR102522502B1 (en) * | 2016-04-26 | 2023-04-17 | Magic Leap, Inc. (매직 립, 인코포레이티드) | Electromagnetic tracking with augmented reality systems
CN106682572B (en) * | 2016-10-12 | 2020-09-08 | Ninebot (Beijing) Technology Co., Ltd. (纳恩博(北京)科技有限公司) | Target tracking method and system and first electronic device
AU2017365223B2 (en) * | 2016-11-25 | 2022-07-07 | Sensoryx AG | Wearable motion tracking system
US11248949B2 (en) * | 2017-01-10 | 2022-02-15 | Motorola Mobility LLC | Wireless hand sensory apparatus for weight monitoring
US11366184B2 (en) * | 2017-06-13 | 2022-06-21 | Sony Semiconductor Solutions Corporation | Position determination device and method
CN110440801B (en) * | 2019-07-08 | 2021-08-13 | Zhejiang Geely Holding Group Co., Ltd. (浙江吉利控股集团有限公司) | Method, device and system for acquiring positioning perception information
CN110456905A (en) * | 2019-07-23 | 2019-11-15 | Guangdong Virtual Reality Technology Co., Ltd. (广东虚拟现实科技有限公司) | Position tracking method, device, system and electronic device
CN110366104B (en) * | 2019-08-12 | 2021-06-08 | Xiangya Hospital, Central South University (中南大学湘雅医院) | Positioning method, device, system, electronic device and computer-readable storage medium


Also Published As

Publication number | Publication date
CN113029190A (en) | 2021-06-25

Similar Documents

Publication | Title
US11009941B2 (en) | Calibration of measurement units in alignment with a skeleton model to control a computer system
US11083950B2 (en) | Information processing apparatus and information processing method
CN107923740B (en) | Sensor device, sensor system, and information processing device
US11460912B2 (en) | System and method related to data fusing
KR20190094954A (en) | Apparatus and method for tracking a movement of eletronic device
US10948978B2 (en) | Virtual object operating system and virtual object operating method
US20210132684A1 (en) | Human computer interaction system and human computer interaction method
CN113029190B (en) | Motion tracking system and method
EP3971683A1 (en) | Human body portion tracking method and human body portion tracking system
EP3813018A1 (en) | Virtual object operating system and virtual object operating method
CN112712545A (en) | Human body part tracking method and human body part tracking system
TWI872180B (en) | System and method related to data fusing
TWI737068B (en) | Motion tracking system and method
US11783492B2 (en) | Human body portion tracking method and human body portion tracking system
EP4016252A1 (en) | System and method related to motion tracking
EP3832435A1 (en) | Motion tracking system and method
EP4016253A1 (en) | System and method related to data fusing
CN114661143A (en) | System and method relating to data fusion
US11369866B2 (en) | Position tracking apparatus and method
CN114745010A (en) | System and method relating to motion tracking
JP2022096724A (en) | System and method related to data fusion
JP2022096723A (en) | System and method related to motion tracking
TW202225916A (en) | Motion tracking system and method
JP2021089691A (en) | Action tracking system and method for tracking actions
US20200089940A1 (en) | Human behavior understanding system and method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
