BACKGROUND
Wearable technologies are electronic devices incorporated into clothing or worn on the body. They are often used to monitor an athlete's movements to improve performance. Wearable devices may include motion sensors such as accelerometers and gyroscopes. They may also include physiologic sensors such as heart rate monitors and temperature sensors.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
FIG. 1 is a block diagram of a motion sensing system in some examples of the present disclosure.
FIG. 2 illustrates a skeletal model for generating the avatar in some examples of the present disclosure.
FIG. 3 illustrates a motion capture suit of FIG. 1 in some examples of the present disclosure.
FIG. 4 illustrates a sensor socket fixed to a location on the motion capture suit of FIG. 1 in some examples of the present disclosure.
FIG. 5 illustrates a sensor just before being inserted in a sensor socket in some examples of the present disclosure.
FIG. 6 illustrates a sensor inserted and locked to a sensor socket in some examples of the present disclosure.
FIG. 7 illustrates a sensor set in some examples of the present disclosure.
FIG. 8 illustrates the orientation of a sensor with a gyroscope having a specific orientation for a given activity in some examples of the present disclosure.
FIG. 9 illustrates a sensor socket with a socket base fixed to a motion capture suit and a rotatable bezel around the socket base in some examples of the present disclosure.
FIG. 10 is a block diagram illustrating a sensor in some examples of the present disclosure.
FIG. 11 is a swimlane diagram demonstrating how a pod of FIG. 1 is configured in some examples of the present disclosure.
FIG. 12 is a flowchart of a method illustrating the operations of the pod of FIG. 1 in some examples of the present disclosure.
Use of the same reference numbers in different figures indicates similar or identical elements.
DETAILED DESCRIPTION
In some examples of the present disclosure, a motion sensing system is provided for full or partial body motion capture and reconstruction in a variety of circumstances and environments. The system includes a motion capture suit with sensors located at strategic locations on a user's body. A sensor may be oriented to have its axes of measurement offset from the axis of the greatest motion (e.g., acceleration or angular velocity) in order to avoid topping out or saturating the sensor. On the motion capture suit, a sensor socket may be rotatable so an inserted sensor's axes of measurement may be adjusted for a particular activity.
Additional sensors may be added to the system for tracking a single body part in greater detail or for tracking an extra piece of equipment, such as a golf club. Different types of sensors may be added to the system, such as pressure insoles, heart rate monitors, etc. Sensors with different specifications may be placed at different locations on the body, thereby allowing the same suit to be used for different activities or for users of different capabilities.
FIG. 1 is a block diagram of a motion sensing system 100 in some examples of the present disclosure. Motion sensing system 100 is customizable and configurable according to the action undertaken by a user, as well as the profile and the biometrics of the user.
Motion sensing system 100 includes a motion capture suit 112 with motion tracking sensors 114 that directly measure movements of the user wearing suit 112 during an activity to be analyzed. Motion capture suit 112 may be one piece or may include multiple sections, such as an upper body section or shirt 112-1, a lower body section or pants 112-2, a cap or hood 112-3, and socks or insoles 112-4. Motion capture suit 112 may be elastic and relatively tight fitting to minimize shifting of motion tracking sensors 114 relative to the body. Motion tracking sensors 114 are located in or on motion capture suit 112 to measure movement of specific body parts. Each motion tracking sensor 114 may include a combination of accelerometer, gyroscope, and magnetometer, whose raw data outputs may be processed to determine the position, orientation, and movement of the corresponding body part. The raw data outputs include accelerations (m/s^2) along the measurement axes of the accelerometer, angular velocity (rad/s) about the measurement axes of the gyroscope, and magnetic field vector components (Gauss) along the measurement axes of the magnetometer.
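As a point of reference, the raw output of one motion tracking sensor at a single sample instant can be represented as a small record. The sketch below is a minimal illustration only; the field names and the use of Python dataclasses are assumptions made for clarity and are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImuSample:
    """One raw reading from a motion tracking sensor (hypothetical layout)."""
    sensor_id: str                            # unique ID of the reporting sensor
    timestamp_s: float                        # sample time in seconds
    accel_mps2: Tuple[float, float, float]    # accelerations along X, Y, Z (m/s^2)
    gyro_rads: Tuple[float, float, float]     # angular velocities about X, Y, Z (rad/s)
    mag_gauss: Tuple[float, float, float]     # magnetic field components along X, Y, Z (Gauss)

# Example: a sample from a forearm sensor at rest, reading roughly 1 g on the Z axis.
sample = ImuSample("L_Forearm-01", 0.001, (0.0, 0.0, 9.81), (0.0, 0.0, 0.0), (0.2, 0.0, 0.4))
```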
Motion sensing system 100 may include one or more biofeedback sensors 115 that measure physiological functions or attributes of the subject such as the user's heart rate, respiration, blood pressure, or body temperature. For example, a pressure insole in the user's shoe can measure the timing and amount of weight transfer from one foot to another or between the ball and toe of the subject's foot during the measured activity.
The user may employ a piece of equipment 116 during the measured activity. Equipment 116 may, for example, be sporting equipment such as a golf club, a tennis or badminton racket, a hockey stick, a baseball or cricket bat, a ball, a puck, or a shuttle that the subject uses during a measured sports activity. Equipment 116 may alternatively be a tool, exercise equipment, a crutch, a prosthetic, or any item that a subject is being trained to use. One or more motion tracking sensors 114 may be attached to equipment 116 to determine the position, orientation, movement, or bend/flex of equipment 116.
Motion sensing system 100 may include an electronic device 120 that provides additional motion data of the subject or equipment 116, such as a golf launch monitor that provides swing speed, ball speed, and ball spin rate.
A sensor controller 118, also known as a "pod," is attached to or carried in motion capture suit 112. Pod 118 has wired or wireless connections to sensors 114 and 115. Pod 118 processes the raw motion data from sensors 114 to produce geometric data for a skeletal model and metrics calculated from a combination of the raw motion data and the geometric data. Pod 118 transmits the geometric data, the metrics, and the biofeedback data to an app 134 on a smart device via wireless or wired connection (e.g., Bluetooth) during or after the measured activity. The smart device may be a smart phone, a tablet computer, a laptop computer, or a desktop computer. App 134 generates scores from the geometric data and provides visual feedback in the form of an avatar that shows the movement of the user.
For hardware, pod 118 includes processor 136 and memory 138 for executing and storing the software. Pod 118 also includes an RS-485 data bus transceiver (not shown) for communicating with sensors 114 and 115, a Wi-Fi transceiver (not shown) for communicating with a wireless network to access the Internet, and a Bluetooth transceiver (not shown) for communicating with app 134 on the smart device.
For software, pod 118 includes an operating system (OS) 124 with a bus manager driver 126 executed by the processor, a bus manager 128 executed by the data bus transceiver, an application controller 130 that runs on the OS, and a number of activity executables 132 that detect and analyze actions of different activities (e.g., a golf swing for golf, a bat swing for baseball, and a groundstroke for tennis). Pod 118 and any power source may be stored in a pocket of motion capture suit 112.
FIG. 2 illustrates a skeletal model 150 for generating the avatar in some examples of the present disclosure. Skeletal model 150 is a hierarchical set of joint nodes and limb segments that link the joint nodes. Each limb segment represents a bone, or a fixed bone group, in the human body. The highest joint node in the hierarchy is the root node. In a chain of joint nodes, the joint nodes closer to the root node are higher in the hierarchy than joint nodes further from the root node. Movements of skeletal model 150 are represented by movements of the joint nodes. Assuming the limb segments are rigid bodies, movement of any joint node is represented by a translation (e.g., a vector) and a rotation (e.g., a quaternion) of the joint node, where the rotation of the joint node determines the orientation of the limb segment extending from the joint node. Movement of the root node controls the position and orientation of the skeletal model in a three-dimensional space. Movement of any other joint node in the hierarchy is relative to that node's parent. In particular, all of the descendant joint nodes from the root node form an articulated chain, where the coordinate frame of a child node is always relative to the coordinate frame of its parent node.
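To illustrate the parent-relative convention described above, the sketch below walks a chain of joint nodes and composes each node's local translation and quaternion rotation with its parent's frame to obtain world-space joint positions. It is a minimal sketch under stated assumptions; the class names, the (w, x, y, z) unit-quaternion layout, and the composition order are illustrative choices, not the disclosure's implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]
Quat = Tuple[float, float, float, float]  # (w, x, y, z), assumed unit length

def quat_mul(a: Quat, b: Quat) -> Quat:
    """Hamilton product: rotation a composed with rotation b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q: Quat, v: Vec3) -> Vec3:
    """Rotate vector v by unit quaternion q (q * v * q^-1)."""
    w, x, y, z = q
    vq: Quat = (0.0, v[0], v[1], v[2])
    conj: Quat = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, vq), conj)
    return (rx, ry, rz)

@dataclass
class JointNode:
    name: str
    translation: Vec3              # offset from the parent joint, in the parent's frame
    rotation: Quat                 # orientation relative to the parent's frame
    parent: Optional["JointNode"] = None

def world_rotation(node: JointNode) -> Quat:
    if node.parent is None:
        return node.rotation
    return quat_mul(world_rotation(node.parent), node.rotation)

def world_position(node: JointNode) -> Vec3:
    """Compose parent frames from the root down to the joint's world position."""
    if node.parent is None:
        return node.translation    # the root node places the model in world space
    px, py, pz = world_position(node.parent)
    ox, oy, oz = quat_rotate(world_rotation(node.parent), node.translation)
    return (px + ox, py + oy, pz + oz)

# Example: a pelvis (root) with a thigh child offset 0.45 m down the parent's Y axis.
pelvis = JointNode("Pelvis", (0.0, 1.0, 0.0), (1.0, 0.0, 0.0, 0.0))
thigh = JointNode("R_Thigh", (0.0, -0.45, 0.0), (1.0, 0.0, 0.0, 0.0), parent=pelvis)
print(world_position(thigh))  # -> (0.0, 0.55, 0.0)
```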
FIG. 3 illustrates motion capture suit 112 in some examples of the present disclosure. Suit 112 may be made of 75% nylon and 25% elastane to provide a compression fit that reduces wobble from the human body. Suit 112 includes multiple sensor networks, and each sensor network supports a number of sensors 114. Thus, pod 118 (FIG. 1) can configure motion capture suit 112 to use selected networks or even selected sensors. Sensors 114 can be added to and removed from each sensor network. For example, a low range sensor 114 on a leg may be swapped with a high range sensor 114 on a hand when the focus is on the lower body for a kicking action. Further, if there is great interest in the movement of the spine, additional sensors 114 may be added to this area.
In some examples, motion capture suit 112 includes (1) a top 112-1 with an upper wiring harness 302 and (2) pants 112-2 with a lower wiring harness 304. Top 112-1 and pants 112-2 may be joined by a zipper or snap fasteners to prevent the garment from riding up or down. Upper wiring harness 302 and lower wiring harness 304 are connected to pod 118, which is held in a pocket of motion capture suit 112.
Upper wiring harness 302 includes three sensor networks 310, 312, and 314. Lower wiring harness 304 includes two sensor networks 316 and 318. Each sensor network is a chain of sensors 114. The cables linking sensor sockets 402 are flexible, and the lengths of each section of the cables are tailored according to the size of the suit/user. Additional slack (in the form of loops) may be added into the wires within the cables so that any pulling of a cable is absorbed by the wires and not by a sensor 114.
Sensors 114 are inertial measurement unit (IMU) sensors. Sensors 114 are placed at strategic locations over the body to track each major limb segment while minimizing movements due to contractions of underlying muscle mass. For each limb segment, the corresponding sensor 114 is generally located near the distal end and on the outer surface of the limb segment.
Five sensors 114 are placed to track the movements of the pelvis, mid-back, upper back, left shoulder, and right shoulder. Sensors 114 are also located to track the movements of the head and feet. For example, sensors 114 may be equipped with hooks to loop over or into a hat and shoes. Approximate proximal-distal locations of sensors 114 are provided in Table 1 below.
TABLE 1
Approximate sensor location on each segment

| Node Name  | Description | % location on long axis of limb segment |
|            |             |  6 |
| L_Arm      |             | 78 |
| L_Foot     |             | 58 |
| L_Forearm  |             | 82 |
| L_Shank    |             | 77 |
| L_Thigh    |             | 84 |
| L_Hand     |             | 65 |
| L_Shoulder | L_Scap_Sho  | 22 |
|            | L_Scap_Spi  | 27 |
| Pelvis     |             | 99 |
| R_Arm      |             | 75 |
| R_Foot     |             | 56 |
| R_Forearm  |             | 84 |
| R_Shank    |             | 77 |
| R_Thigh    |             | 80 |
| R_Hand     |             | 71 |
| R_Shoulder | R_Scap_Sho  | 22 |
|            | R_Scap_Spi  | 27 |
| Mid-Back   |             | 58 |
In Table 1, percentages are expressed from the proximal to the distal end of the corresponding limb segment. For example, L_Arm=60 means that sensor 114 on the left arm is 60% along the length of the upper arm towards the elbow. Sensors 114 for the upper and mid-backs are expressed relative to the length of the spine from top to bottom (e.g., from cervical vertebra 7 to the sacrum). Sensor 114 for the left shoulder is located based on L_Scap_Sho and L_Scap_Spi, where L_Scap_Sho is 22% along the imaginary line between the shoulder joints, and L_Scap_Spi is 27% along the length of the spine from top to bottom. Sensor 114 for the pelvis is located 99% along the length of the spine from top to bottom.
Sensors 114 are inserted into sensor sockets so the sensors can be easily removed and replaced. FIG. 4 illustrates a sensor socket 402 fixed to a location on suit 112 in some examples of the present disclosure. A hole in the material of suit 112 is provided for each sensor socket 402. The hole is reinforced with a polyurethane ring 403, which sensor socket 402 clamps onto to prevent fabric fraying and movement of sensor 114. While some sensor sockets 402 are fixed to suit 112, others may be equipped with clips for attachment to a hat, a shoe, or other apparel. Sensor socket 402 is daisy chained to other sensor sockets in a sensor network by cables within suit 112.
FIG. 5 illustrates a sensor 114 just before being inserted in sensor socket 402 in some examples of the present disclosure. Sensor socket 402 has a particular arrangement of contact sockets 404 for receiving contact pins 406 on sensor 114 so the sensor can only be inserted in the correct orientation into the sensor socket. For example, contact sockets 404 and contact pins 406 may be arranged in a "V" shape. Sensor socket 402 further includes holes 408 for receiving cantilever hooks 410 on sensor 114 that lock the sensor to the sensor socket. FIG. 6 illustrates sensor 114 inserted and locked to sensor socket 402 in some examples of the present disclosure. To release sensor 114 from sensor socket 402, the user squeezes the ends of hooks 410 and then pulls the sensor out from the sensor socket.
When it is desirable to extend a sensor network or add an additional sensor 114 near a location, a single sensor 114 may be replaced by a sensor set. FIG. 7 illustrates a sensor set 702 in some examples of the present disclosure. Sensor set 702 includes a base sensor 704 and an extended sensor 706. Base sensor 704 is configured like sensor 114 to fit in sensor socket 402 at the end of the sensor network. Base sensor 704 has a cable 708 that runs to extended sensor 706. Alternatively, sensors 704 and 706 include wireless transceivers that allow them to communicate wirelessly (e.g., Bluetooth). Extended sensor 706 may have an elastic loop or clip 710 to fix the sensor on another body part (e.g., fingers) or a piece of equipment (e.g., a golf club).
The interchangeability of sensors 114 means that the internal characteristics of the sensors may be altered or adapted according to the activity. It is possible that sensors 114 may "top out" or saturate. For example, the gyroscope in a sensor 114 has a maximum range of +/-4000 deg/s for each individual measurement axis (X, Y, Z), but some activities may go above this. In order to make the most of this range, sensor 114 may be placed so that the measurement axes of the gyroscope are aligned at 45 degrees to the axis of the segment about which it rotates fastest. For example, in golf, the hands are rotating fastest when the knuckles point downwards towards the ball around the point of contact. If the axis of rotation of the hands aligns directly with one of the measurement axes of the gyroscope in sensor 114 at this point, the limit of measurement is 4000 deg/s before saturation. However, if the orientation of sensor 114 is such that two of the measurement axes of the gyroscope are at 45 degrees to the axis of rotation, the maximum reading before saturation occurs is increased to 5,656.85 deg/s, which is an increase of 41%. If all three measurement axes of the gyroscope are oriented so that they are equally and maximally offset from the rotation axis at maximum speed, the maximum measurement reaches 6,928.20 deg/s before saturation, which is an increase of 73%.
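The figures above follow from simple vector geometry: with a per-axis limit of L = 4000 deg/s, the largest rotation-rate magnitude that can be measured without any single axis saturating grows as the rotation vector is split evenly across more measurement axes. A brief check of the arithmetic:

\[
\omega_{\max,1} = L = 4000\ \mathrm{deg/s} \quad \text{(rotation aligned with one axis)}
\]
\[
\omega_{\max,2} = \sqrt{L^2 + L^2} = L\sqrt{2} \approx 5656.85\ \mathrm{deg/s} \quad (\approx +41\%)
\]
\[
\omega_{\max,3} = \sqrt{L^2 + L^2 + L^2} = L\sqrt{3} \approx 6928.20\ \mathrm{deg/s} \quad (\approx +73\%)
\]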
FIG. 8 illustrates the orientation of a sensor 114 with a gyroscope 802 having a specific orientation for a given activity in some examples of the present disclosure. Assume sensor 114 is located on a (forearm) limb segment 804 and the long axis 806 of the forearm experiences the greatest angular velocity for the given activity. As described before, sensor 114 is located 82 or 84% along the long axis 806 of forearm 804 toward (hand) limb segment 808. To avoid saturation, gyroscope 802 is oriented so at least two of its measurement axes (e.g., two measurement axes 810 and 812) are rotated about 45 degrees (e.g., within 5 to 15 degrees) relative to the axis of the greatest rotation when sensor 114 is inserted in sensor socket 402. In most activities, the axis of greatest rotation is the long axis 806. In some examples, measurement axes 810 and 812 are located in substantially the same plane as long axis 806 (e.g., within 5 to 15 degrees). When present, the accelerometer and the magnetometer may also be oriented in sensor 114 so their measurement axes are offset from the axes of the greatest corresponding measurements.
Alternatively, sensor 114 or sensor socket 402 may include a mechanism that allows the sensor to be rotated during adjustment but then fixed during use so as not to introduce artifacts into the measurement signal. Such a mechanism allows sensors 114 to be aligned to a given axis or segment. This functionality may be particularly useful when attaching sensors 114 to a piece of equipment such as a golf club, as the sensor can be aligned to the long axis of the shaft and the face of the head.
FIG. 9 illustrates a sensor socket 900 with a socket base 902 fixed to motion capture suit 112 and a rotatable bezel 904 around the socket base in some examples of the present disclosure. Rotatable bezel 904 has holes 402 for receiving cantilever hooks 410 of sensor 114 so the sensor is attachable to the bezel. Socket base 902 has contact arcs 906 for electrical connection with contact pins 406 of sensor 114. When secured to rotatable bezel 904, sensor 114 can be rotated and locked (e.g., by a spring-loaded pin) in one (1) degree increments to provide a shift from -90 to +90 degrees of the measurement axes of sensor 114. Alternatively, rotatable bezel 904 is fixed to motion capture suit 112 and socket base 902 is rotatable relative to the bezel. This allows socket base 902 to be implemented with pin contact sockets 404 instead of contact arcs 906 for electrical connection with contact pins 406 of sensor 114.
App 134 may determine the angle at which to set the orientation of sensor 114 for the user based on measurements taken from previous performances recorded during use of motion capture suit 112. Alternatively, app 134 may provide the angle based on knowledge of the skill/technique/variation that the user is about to perform. In other words, for certain sports, app 134 may have a predetermined angle for sensor 114 based on statistical data.
When sensors 114 are swapped to a different location, pod 118 is able to recognize this due to each sensor having a unique ID. As the calibration data and offsets for each sensor and its components (accelerometer, magnetometer, and gyroscope) are stored both remotely and locally, pod 118 can correctly associate this data with the owner sensor 114 no matter its location in motion capture suit 112.
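One way to picture this bookkeeping is a calibration registry keyed by the sensor's unique ID rather than by socket position, so a reading is corrected the same way wherever the sensor is plugged in. The sketch below is illustrative only; the registry structure, field names, and the simple per-axis offset model are assumptions, not the pod's actual firmware.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SensorCalibration:
    """Per-sensor offsets stored locally and remotely (hypothetical fields)."""
    accel_offset: Vec3
    gyro_offset: Vec3
    mag_offset: Vec3

# Calibration is looked up by the sensor's unique ID, never by its socket location.
calibration_registry: Dict[str, SensorCalibration] = {
    "SN-0042": SensorCalibration((0.02, -0.01, 0.00), (0.1, 0.0, -0.2), (0.01, 0.00, 0.03)),
}

def correct_gyro(sensor_id: str, raw_gyro: Vec3) -> Vec3:
    """Apply the owning sensor's stored gyroscope offsets to a raw reading."""
    cal = calibration_registry[sensor_id]
    return tuple(r - o for r, o in zip(raw_gyro, cal.gyro_offset))

# The same correction applies whether SN-0042 sits in a forearm socket or a thigh socket.
print(correct_gyro("SN-0042", (10.1, 0.0, -0.2)))  # -> (10.0, 0.0, 0.0)
```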
While IMU type sensors 114 are discussed in the present disclosure, motion sensing system 100 (FIG. 1) is not limited only to this type of sensor. Because sensors 114 are removable from motion capture suit 112 (FIGS. 1 and 3), they may be swapped with ones that incorporate additional functionality in order to suit the activity, purpose, or environment as required.
FIG. 10 is a block diagram illustrating sensor 114 in some examples of the present disclosure. Sensor 114 includes a 6-axis accelerometer/gyroscope 1002, a 3-axis magnetometer 1004, and an optional 3-axis high G accelerometer 1006. Sensor 114 may include additional components. In some examples, sensor 114 includes a microphone to incorporate voice command and communication functions. In some examples, sensor 114 includes a speaker to provide audio feedback, communication functions, and music playback. In some examples, sensor 114 includes a light to provide visual feedback. In some examples, sensor 114 includes a screen to display information and feedback. In some examples, sensor 114 includes an interactive display for displaying interactive feedback. In some examples, sensor 114 includes a mobile communication device such as a mobile phone, music player, or a payment device. In some examples, sensor 114 includes a proximity sensor to detect the presence of other users. In some examples, sensor 114 includes haptic feedback mechanisms to provide haptic feedback. In some examples, sensor 114 includes a medication dispenser for administering medication. In some examples, sensor 114 includes an environment sensor that detects temperature, humidity, wind speed, air quality, or altitude. In some examples, sensor 114 includes a hazards sensor that detects smoke, carbon monoxide, or other harmful gases. In some examples, sensor 114 includes an impact or shear force sensor.
FIG. 11 is a swimlane diagram demonstrating how pod 118 is configured in some examples of the present disclosure. In step 1, upon startup, application controller 130 requests the system configuration of pod 118 from a provider 202 of motion sensing system 100 over the Internet. The system configuration identifies sensors, activity executables, and other hardware and software components that the user is authorized to use (e.g., by subscription). Application controller 130 may connect to the Internet through Wi-Fi or Bluetooth. In step 2, provider 202 sends the system configuration to application controller 130 to verify and enable the authorized hardware and software for the user.
In step 3, an interactive app 134 (FIG. 1) on a smart device requests to connect with application controller 130 over Bluetooth. The smart device may be a laptop, a smart phone, or a tablet. In steps 4 and 5, application controller 130 and app 134 exchange handshake messages to establish a connection. In step 6, app 134 sends a user-selected activity to application controller 130. In step 7, app 134 sends a new skill model of the user-selected activity to application controller 130. The skill model identifies a particular action or a task (e.g., training) that the user will perform. The task may be a repetition of a combination of actions assigned by a coach.
In step 8, when an activity executable 132 (FIG. 1) for the activity has not been previously downloaded, application controller 130 requests the activity executable 132 from the cloud, i.e., from provider 202 over the Internet. Activity executable 132 includes a suit configuration (i.e., sensor configurations) for the user-selected activity and code for detecting actions, recognizing phases in the actions, and extracting metrics from the phases. In step 9, provider 202 sends activity executable 132 to application controller 130 over the Internet.
In step 10, application controller 130 requests bus manager driver 126 to open a connection to motion capture suit 112. In step 11, bus manager driver 126 requests bus manager 128 to enable motion capture suit 112. In step 12, bus manager 128 informs bus manager driver 126 that motion capture suit 112 has been turned on. In step 13, bus manager driver 126 informs application controller 130 that motion capture suit 112 has been turned on. In step 14, application controller 130 sends the suit configuration for the activity to bus manager driver 126. In step 15, bus manager driver 126 forwards the suit configuration to bus manager 128, which configures motion sensors 114 accordingly. In step 16, bus manager 128 sends a ready status, suit diagnostic information, and an identification (ID) of motion capture suit 112 and sensors 114, 115 to bus manager driver 126. In step 17, bus manager driver 126 forwards the ready status, the suit diagnostic information, and the suit and sensor IDs to application controller 130. In step 18, application controller 130 sends the suit diagnostic information and the suit and sensor IDs to provider 202 for record keeping and maintenance purposes.
In step 19, application controller 130 informs app 134 that application controller 130 is ready to capture motion data of the new activity. In step 20, application controller 130 runs activity executable 132 for the activity. In step 21, app 134 instructs application controller 130 to begin acquiring raw motion data from motion sensors 114 in motion capture suit 112 and on sports equipment 116. In some examples, application controller 130 also begins acquiring motion data from electronic device 120. In step 22, application controller 130 generates sparse geometric data streams from the raw data streams. Whereas the raw data streams contain motion data at a regular interval (e.g., 1,000 samples per second), the sparse data streams contain motion data when there is sufficient change from a prior value or sufficient time has passed from when the last value was recorded. In other words, one or more portions of a sparse data stream may contain motion data at irregular intervals. Also in step 22, activity executable 132 recognizes the action and its phases from the raw data streams and the sparse geometric data streams, extracts metrics from the phases, and sends the sparse geometric data streams and the metrics to app 134, which generates scores and the appropriate visual feedback to the user. The visual feedback may be an avatar, which is generated with skeletal model 150 in FIG. 2, illustrating the movement of the user.
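The change-or-timeout rule for the sparse streams can be pictured as a small filter: a new geometric value is emitted only if it differs enough from the last emitted value or if too much time has elapsed. The sketch below is a simplified illustration under assumed thresholds and field names, not the pod's actual stream format.

```python
from typing import Iterable, List, Tuple

Sample = Tuple[float, float]  # (timestamp in seconds, geometric value such as a joint angle)

def sparsify(stream: Iterable[Sample],
             min_change: float = 0.5,
             max_gap_s: float = 0.1) -> List[Sample]:
    """Keep a sample only when it changed enough or enough time passed since the last kept one."""
    kept: List[Sample] = []
    for t, value in stream:
        if not kept:
            kept.append((t, value))
            continue
        last_t, last_value = kept[-1]
        if abs(value - last_value) >= min_change or (t - last_t) >= max_gap_s:
            kept.append((t, value))
    return kept

# Example: a 1,000 Hz stream of a nearly constant angle collapses to a handful of samples.
raw = [(i / 1000.0, 10.0 + (0.0 if i < 500 else 5.0)) for i in range(1000)]
print(len(raw), len(sparsify(raw)))  # 1000 samples in, far fewer out
```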
FIG. 12 is a flowchart of a method 1200 illustrating the operations of pod 118 (FIG. 1), more specifically application controller 130 (FIG. 1) and activity executable 132 (FIG. 1), in some examples of the present disclosure. Method 1200, and any method described herein, may be implemented as instructions encoded on a computer-readable medium that is to be executed by a processor of a computing system. Method 1200, and any method described herein, may include one or more operations, functions, or actions illustrated by one or more blocks. Although the blocks are illustrated in a sequential order, these blocks may also be performed in parallel, and/or in a different order than those described herein. In addition, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or eliminated based upon the desired implementation. Method 1200 may start with block 1202.
In block 1202, application controller 130 receives the profile and biometrics of the user. Alternatively, application controller 130 retrieves from memory the user profile and the user biometrics. The user profile includes gender, date of birth, ethnicity, location, skill level, recent performance data, and health information. The user may input his or her profile through app 134 (FIG. 1), which transmits the information to application controller 130. The user biometrics include passive range of movement at each joint, active range of movement at each joint, strength indicator, anthropometric measurements, resting heart rate, and breathing rates. Biometric measurements may be taken at point of sale of pod 118 and recorded in the memory of pod 118. Alternatively, the user may input his or her biometrics through app 134, which transmits the information to application controller 130. The user biometrics may be periodically updated by using app 134 or through provider 202 over the Internet. Block 1202 may be followed by block 1204.
In block 1204, application controller 130 receives the user-selected activity and action from app 134. This corresponds to steps 6 and 7 of method 1100 in FIG. 11. Alternatively, application controller 130 retrieves the last selected activity and action from memory.
As mentioned previously, an action may be a physical skill, a technique of a skill, a variation of a technique, or a pose. A skill is defined as a movement that is part of an overarching activity (e.g., a movement sequence related to a medical condition or to health) or a sport. For example, the golf swing is at the core of the game of golf. Nearly all golf swings have some aspects in common (e.g., using a club, a backswing, a downswing, a follow-through), which may be used to specify how to analyze a skill and, in particular, how to detect and analyze the skill to provide feedback.
The skill can be further specified according to a technique of the skill, the equipment being used, and a variation of the technique. For golf, the technique may be a type of shot, such as a drive, approach, chip, or putt; the equipment may be a type of golf club, such as a driver, 3 wood, or 7 iron; and the variation of the technique may be a shot shaping, such as straight, fade, draw, high, or low. For tennis, the technique may be a type of groundstroke, such as a forehand, backhand, volley, or serve; the equipment may be a type or size of tennis racket, such as stiffness, weight, or head size; and the variation of the technique may be a ball spin, such as topspin, flat, or slice. Specifying such information allows activity executable 132 to better identify when a user completes a skill.
Block 1204 may be followed by block 1206.
In block 1206, application controller 130 configures motion sensors 114 for detecting the action. This corresponds to step 14 in method 1100 of FIG. 11. For example, application controller 130 turns on a number of motion sensors 114 and sets their sampling rates for detecting the action. As described above, motion sensors 114 may be part of motion capture suit 112 (FIG. 1) and equipment 116. Block 1206 may be followed by block 1208.
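The per-activity suit configuration can be thought of as a mapping from sensor nodes to an on/off state and a sampling rate, which the application controller pushes toward the bus manager in steps 14 and 15 of FIG. 11. The sketch below is a hypothetical shape for such a configuration; the node names, rates, and the printed stand-in for the bus manager call are illustrative assumptions, not the actual suit configuration format.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class SensorConfig:
    enabled: bool
    sample_rate_hz: int   # raw data rate requested for this sensor

# Hypothetical suit configuration for a kicking action: lower-body sensors on at a high
# rate, an upper-body sensor switched off.
suit_config: Dict[str, SensorConfig] = {
    "R_Thigh": SensorConfig(True, 1000),
    "R_Shank": SensorConfig(True, 1000),
    "R_Foot":  SensorConfig(True, 1000),
    "Pelvis":  SensorConfig(True, 500),
    "L_Hand":  SensorConfig(False, 0),
}

def apply_config(config: Dict[str, SensorConfig]) -> None:
    """Stand-in for forwarding the configuration to the bus manager."""
    for node, cfg in config.items():
        state = f"on at {cfg.sample_rate_hz} Hz" if cfg.enabled else "off"
        print(f"configure {node}: {state}")

apply_config(suit_config)
```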
In block 1208, application controller 130 receives time series of raw motion data (raw data streams) from corresponding motion sensors 114. Application controller 130 may also receive a time series of biofeedback data (biofeedback data stream) from biofeedback sensor 115. Application controller 130 may further receive a time series of additional motion data (additional motion data stream) from electronic device 120 (FIG. 1). Block 1208 may be followed by block 1210.
In block 1210, application controller 130 generates time series of sparse geometric data (sparse geometric data streams) from the raw data streams. Block 1210 may be followed by block 1212.
In blocks 1212 and 1214, activity executable 132 performs action identification (ID). In block 1212, activity executable 132 performs raw data action ID to detect the action in time windows of the raw data streams. Activity executable 132 may use different thresholds and different motion sensors 114, or combinations of motion sensors 114, to identify different skills and different techniques. As this process uses raw motion data, it allows faster data processing than using more processed (fused) data. To improve detection, activity executable 132 may also use the biofeedback data stream from biofeedback sensor 115 and the additional motion data stream from electronic device 120. Activity executable 132 may modify the identification process based on the user profile and the user biometrics received or retrieved in block 1202. Block 1212 may be followed by block 1214.
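A simple way to realize raw data action ID is to scan a raw gyroscope magnitude stream for excursions above an activity-specific threshold and mark a candidate time window around each excursion. The sketch below is only a schematic of that idea, with made-up thresholds and window lengths; it is not the activity executable's actual detection code.

```python
from typing import List, Sequence, Tuple

def raw_action_windows(times_s: Sequence[float],
                       gyro_magnitude: Sequence[float],
                       threshold: float = 800.0,   # deg/s, activity-specific (assumed)
                       pre_s: float = 0.5,
                       post_s: float = 1.0) -> List[Tuple[float, float]]:
    """Return (start, end) time windows around samples that exceed the threshold."""
    windows: List[Tuple[float, float]] = []
    for t, mag in zip(times_s, gyro_magnitude):
        if mag < threshold:
            continue
        start, end = t - pre_s, t + post_s
        if windows and start <= windows[-1][1]:
            windows[-1] = (windows[-1][0], end)   # merge overlapping candidate windows
        else:
            windows.append((start, end))
    return windows

# Example: a single fast rotation near t = 2.0 s produces one candidate window.
ts = [i / 100.0 for i in range(500)]
mags = [1200.0 if 195 <= i <= 210 else 50.0 for i in range(500)]
print(raw_action_windows(ts, mags))  # -> one window, roughly (1.45, 3.1)
```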
In block 1214, activity executable 132 performs geometric data action ID to detect the action in the time windows identified in the raw data action ID. Activity executable 132 performs geometric data action ID based on the sparse geometric data streams in the identified time windows. To improve detection, activity executable 132 may also use the raw data streams from motion sensors 114, the biofeedback data stream from biofeedback sensor 115, and the additional motion data stream from electronic device 120. Activity executable 132 may modify the identification process based on the user profile and the user biometrics received or retrieved in block 1202. Block 1214 may be followed by block 1216.
In block 1216, activity executable 132 performs phase ID to detect phases of the detected action in the time windows identified in the geometric data action ID. Activity executable 132 performs phase ID based on the sparse geometric data streams in the identified time windows. To improve detection, activity executable 132 may also use the raw data streams from motion sensors 114, the biofeedback data stream from biofeedback sensor 115, and the additional motion data stream from electronic device 120. Activity executable 132 may modify the phase identification based on the user profile and the user biometrics. Block 1216 may be followed by block 1218.
In block 1218, activity executable 132 determines metrics from the phases identified in the phase ID. Activity executable 132 extracts the metrics from the sparse geometric data in the identified phases. Activity executable 132 may also extract the metrics from the raw data streams from motion sensors 114, the biofeedback data stream from biofeedback sensor 115, and the additional motion data stream from electronic device 120. Activity executable 132 may modify the metrics being detected based on the user profile and the user biometrics. Block 1218 may be followed by block 1220.
In block 1220, app 134 determines scores based on the metrics received from pod 118. App 134 may modify the scoring based on the user profile and the user biometrics and according to preferences of the user or a coach. Details of block 1220 are described later. Block 1220 may be followed by block 1222.
In block 1222, app 134 prioritizes feedback by applying weights to the scores, summing groups of the weighted scores to generate group summary scores, applying weights to the group summary scores, summing supergroups of the weighted group summary scores to generate supergroup summary scores, and generating a hierarchical structure based on the group summary scores and the supergroup summary scores. Block 1222 may be followed by block 1224.
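The two-level weighting described here amounts to weighted sums of weighted sums. The sketch below shows that aggregation for a made-up golf grouping; the metric names, group names, weights, and nesting are illustrative assumptions, not the app's actual scoring scheme.

```python
from typing import Dict

# Hypothetical per-metric scores produced in block 1220.
scores: Dict[str, float] = {"hip_turn": 70.0, "shoulder_turn": 80.0, "wrist_hinge": 60.0, "tempo": 90.0}

# Groups of weighted scores, then supergroups of weighted group summaries (weights assumed).
groups = {
    "rotation": {"hip_turn": 0.5, "shoulder_turn": 0.5},
    "timing": {"wrist_hinge": 0.4, "tempo": 0.6},
}
supergroups = {"swing": {"rotation": 0.7, "timing": 0.3}}

group_summary = {
    name: sum(scores[m] * w for m, w in members.items())
    for name, members in groups.items()
}
supergroup_summary = {
    name: sum(group_summary[g] * w for g, w in members.items())
    for name, members in supergroups.items()
}

# Hierarchical structure used to prioritize feedback, highest-level summaries on top.
feedback_hierarchy = {"supergroups": supergroup_summary, "groups": group_summary, "scores": scores}
print(feedback_hierarchy["supergroups"])  # -> {'swing': 75.9}
```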
In block 1224, app 134 uses the sparse geometric data streams of the detected action in the identified windows to create and animate the avatar. App 134 may use the hierarchical structure of the scores to create a visual comparison between the avatar and an optimum performance. App 134 may enhance the visual comparison by indicating angle or distance notations based on the scores to highlight areas of interest. App 134 may also play back media based on the hierarchical structure of the scores.
Various other adaptations and combinations of features of the examples disclosed are within the scope of the invention. Numerous examples are encompassed by the following claims.