
System and method for intelligently identifying sit-up action posture completion condition

Info

Publication number
CN113255622A
Authority
CN
China
Prior art keywords
sit
stage
action
posture
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110792620.3A
Other languages
Chinese (zh)
Other versions
CN113255622B (en)
Inventor
林平
李瀚懿
丁观莲
陈天宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
One Body Technology Co.,Ltd.
Original Assignee
Beijing Yiti Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-07-14
Filing date: 2021-07-14
Publication date: 2021-08-13
Application filed by Beijing Yiti Technology Co ltd
Priority to CN202110792620.3A
Publication of CN113255622A
Application granted
Publication of CN113255622B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a system and method for intelligently identifying the sit-up action posture completion status, comprising the following steps: setting a key angle formed by the relevant body parts corresponding to the action posture; comparing the key angles in each pair of adjacent frames of a test video and marking each comparison result with an identifier; and connecting the identifiers in series to form a result list, and obtaining the completion status of the action posture according to the changes of the identifiers in the result list. According to the invention, the body descending and body ascending stages are identified by comparing the key angle in consecutive frames of the video, and the completion of the action posture is thereby recognized; no recognition model needs to be trained, the video frames can be used for direct recognition, the algorithm is simplified, and deployment is fast and convenient.

Description

System and method for intelligently identifying sit-up action posture completion condition
Technical Field
The invention relates to the technical field of intelligent image recognition, and in particular to a system and method for intelligently recognizing the sit-up action posture completion status.
Background
Sit-up is one of the most common ways of exercising the body (strength training), since it is simple to perform and not limited by venue. It is therefore a required subject in military training and has become an important subject of strength training and assessment in schools at all levels.
During sit-up exercise or examination, judging whether each action is standard, and counting accurately, are essential to the exercise or examination result.
With the development of AI technology, automatic recognition and counting of sit-ups has begun to replace manual counting and is gradually coming into wide use. For example, Chinese invention patent CN111368810B discloses a sit-up detection system and method based on recognition of human-body and skeleton key points: it detects the change of the human body frame shape across consecutive frames, together with the change of the angle between the horizontal and the line through the waist and shoulder key points, to judge comprehensively whether the person is lying down or sitting up during the motion from lying to sitting, thereby completing sit-up detection of the tested person within a fixed time. However, that scheme has the following problems:
1. It relies on two parameters: the angle V between the horizontal and the line connecting the hip and shoulder skeleton key points, and the angle V' between the diagonal and the bottom edge of the human body frame. Because the recognition error of the human body frame is large, the recognition result is not accurate enough.
2. Its sit-up behavior recognition module is a deep learning model based on an attention mechanism and an LSTM. The training process is as follows: collect sit-up videos of subjects of different ages, sexes and statures and label them as positive samples, while collecting some non-sit-up videos as negative samples; construct an end-to-end network with a double-layer structure (attention mechanism plus LSTM) by fine-tuning the corresponding parameters; input a video frame sequence to the model, which outputs whether the current frame sequence ends with a sit-up behavior. Training such a model requires a large amount of sample data, and the preliminary workload is heavy.
In view of this, it is necessary to improve the existing intelligent recognition algorithm for the sit-up posture completion status, so as to improve recognition accuracy, reduce the preliminary workload, and make the system easy to deploy.
Disclosure of Invention
In view of the above defects, the technical problem to be solved by the present invention is to provide a system and method for intelligently identifying the sit-up posture completion status, so as to solve the problems in the prior art that the recognition result is not accurate enough and that training a model on a large amount of sample data beforehand requires heavy workload.
Therefore, the invention provides a method for intelligently identifying the completion status of the sit-up action posture, comprising the following steps:
dividing the standard action posture of one complete sit-up into a body ascending stage and a body descending stage;
setting the included angle formed by the shoulder joint, the waist joint and the knee joint as the key angle;
comparing the key angles in each pair of adjacent frames of a test video, marking each comparison result with an identifier, and connecting the identifiers in series to form a result list;
traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
identifying the body ascending stage and the body descending stage of the sit-up according to the changes of the identifiers in the result list;
and recognizing the sit-up action posture according to the alternation of the identified body ascending and body descending stages.
In the above method, preferably, the identifier is represented by a numeral 0 or 1.
In the above method, preferably, whether the body ascending motion is completed is judged by detecting whether the key angle is smaller than a first threshold, and a first completion flag is generated when the ascending motion is completed;
whether the body descending motion is completed is judged by detecting whether the key angle is larger than a second threshold, and a second completion flag is generated when the descending motion is completed;
counting is performed according to the first completion flag and the second completion flag;
the first threshold is the included angle formed by the shoulder joint, the waist joint and the knee joint at the highest-point posture of the sit-up; the second threshold is that included angle at the starting posture of the sit-up.
In the above method, preferably, after the first completion flag and the second completion flag appear consecutively once, the correct count is incremented by 1; otherwise, the error count is incremented by 1.
In the above method, preferably, whether the ascending motion is correct is judged by detecting whether the knees are bent, whether the elbows touch the knees, and whether the hands leave the shoulders.
In the above method, preferably, when the tester's sit-up posture returns to the initial stage, a timestamp of the test video is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the drop-down list jumps to the test video at the associated timestamp for playback.
The invention also provides a system for intelligently identifying the sit-up action posture completion status, comprising an image acquisition device for acquiring the test video and an action posture recognition device, wherein the action posture recognition device comprises:
an identification module, for comparing the key angles in adjacent frames of the test video, marking each comparison result with an identifier, and connecting the identifiers in series to form a result list, wherein the key angle is the included angle formed by the shoulder joint, the waist joint and the knee joint;
a lifting stage recognition module, for identifying the body descending stage or body ascending stage of the sit-up according to the changes of the identifiers in the result list, the action posture of one complete sit-up being divided in advance into a body descending stage and a body ascending stage;
a correction module, for traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
and an action posture recognition module, for recognizing the sit-up action posture according to the alternation of the identified body descending and body ascending stages.
In the above system, preferably, a counting module is further included, which judges whether the ascending motion of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, generating a first completion flag when it is;
judges whether the descending motion of the sit-up is completed by detecting whether the key angle is larger than a second threshold, generating a second completion flag when it is;
and counts according to the first completion flag and the second completion flag.
In the above system, preferably, a timestamp association module is further included: when the tester's sit-up posture returns to the initial stage, a timestamp of the test video is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the drop-down list jumps to the test video at the associated timestamp for playback.
According to the above technical scheme, the system and method for intelligently identifying the sit-up action posture completion status solve the problem that recognition results in the prior art are not accurate enough. Compared with the prior art, the invention has the following beneficial effects:
the body descending and body ascending stages are identified by comparing the key angle in consecutive frames of the video, and the completion of the sit-up action posture is thereby recognized; no recognition model needs to be trained, the video frames can be used for direct recognition, the algorithm is simplified, and deployment is fast and convenient.
In addition, the result list is traversed with a sliding window and all identifiers in the window are replaced with the majority identifier, which reduces the influence of data oscillation and makes the recognition result more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments of the present invention or the prior art will be briefly described and explained. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart of a method for intelligently identifying a sit-up gesture completion status according to the present invention;
FIG. 2 is a schematic diagram of a standard sit-up gesture completion process;
FIG. 3 is a schematic view of the critical angles formed by the shoulder, waist and knee joints;
FIG. 4 is a schematic diagram illustrating the comparison and marking of the key angles in the (t+1)-th frame and the t-th frame;
FIG. 5 is a schematic diagram of the sliding-window correction of data in the present application;
FIG. 6 is a schematic diagram of a system for recognizing the completion of a sit-up gesture according to the present invention;
FIG. 7 is a schematic diagram of the motion gesture recognition apparatus according to the present invention;
FIG. 8 is a diagram of an identification frame in the present invention, showing a count and an AR auxiliary line;
fig. 9 is a screenshot of the present invention in practical application.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
The realization principle of the invention is as follows:
setting a key angle formed by the relevant body parts corresponding to the sit-up action posture;
comparing the key angles in each pair of adjacent frames of a test video and marking each comparison result with an identifier;
and connecting the identifiers in series to form a result list, and obtaining the completion status of the action posture according to the changes of the identifiers in the result list.
According to this scheme, the body descending and body ascending stages are identified by comparing the key angle in consecutive frames of the video, and the completion of the action posture is thereby recognized; no recognition model needs to be trained, the video frames can be used for direct recognition, the algorithm is simplified, and deployment is fast and convenient.
In order to make the technical solution and implementation of the present invention more clearly explained and illustrated, several preferred embodiments for implementing the technical solution of the present invention are described below.
It should be noted that the terms of orientation such as "inside, outside", "front, back" and "left and right" are used herein as reference objects, and it is obvious that the use of the corresponding terms of orientation does not limit the scope of protection of the present invention.
A complete standard sit-up gesture includes: an initial posture, a body-up phase, a peak, a body-down phase and a return to initial posture.
Starting posture: the upper body leans back until the scapulae touch the mat, the knees are bent to about 90 degrees, the feet are flat on the mat, and the hands are either crossed over the chest holding the two shoulders or placed behind the head.
Body ascending stage: tighten the abdominal muscles and slowly raise the head, then the upper body; both soles must stay firmly on the ground throughout; keep the eyes on the knees and contract the abdominal muscles until the upper body forms a 90-degree angle with the thighs, or the elbows touch or pass the knees.
Body descending stage: after reaching the highest point and holding for a certain time, slowly lower the upper body back until it lies on the mat, returning to the starting posture.
The key points of the sit-up action include: the upper body leans back until the scapulae touch the mat; the upper body bends forward with the lower jaw tucked in until both elbows simultaneously touch the knees or thighs; and the arms remain crossed in front of the chest with the hands holding the shoulders throughout.
Common errors include: the backs of the two shoulders not touching the mat when lying back; the elbows not touching the knees when bending forward; the hands not holding the head, or leaving the shoulders; the knee joints not bent to 90 degrees; sitting up with the help of an elbow pushed against the mat or a lift of the buttocks; and so on. When such wrong postures occur, no count is made.
If the tester rests on the mat, the examination ends.
Embodiment 1
Referring to fig. 1, fig. 1 is a flowchart of a method for intelligently recognizing a sit-up gesture completion status provided by the present invention, the method includes the following steps:
step 110, dividing the standard movement posture for completing one sit-up into two stages, namely a body ascending stage and a body descending stage.
As shown in fig. 2, from top to bottom these are: the initial posture, the body ascending stage, the peak, the body descending stage, and the return to the initial posture.
Thus the sit-up action passes through a body ascending stage and a body descending stage from the starting posture back to the starting posture, and recognition of the sit-up action posture is based on recognizing these two stages.
Step 120, setting the included angle formed by the shoulder joint, the waist joint and the knee joint as the key angle.
As shown in fig. 3, the key angle formed by the shoulder, waist and knee joints is angle A.
The key angle is identified using existing human skeleton recognition technology (a human pose estimation algorithm); the scheme of the invention uses such skeleton recognition to implement the action posture recognition algorithm.
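As an illustration of this step, the following minimal sketch computes the key angle from 2D keypoints such as those returned by any off-the-shelf pose estimator; the joint coordinates and the waist-centered angle convention are assumptions for the example, not details fixed by this description.

```python
import math

def key_angle(shoulder, waist, knee):
    """Key angle (degrees) at the waist, between the waist->shoulder and
    waist->knee vectors, from 2D keypoint coordinates (x, y)."""
    v1 = (shoulder[0] - waist[0], shoulder[1] - waist[1])
    v2 = (knee[0] - waist[0], knee[1] - waist[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("degenerate keypoints")
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_a))

# Lying relatively flat gives a large angle; sitting up gives a small one.
print(round(key_angle((0.9, 0.45), (0.5, 0.5), (0.2, 0.3))))  # ~139
```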
Step 130, capturing a test video of an ongoing sit-up, or importing and playing an existing sit-up test video. Starting from the initial posture of the sit-up, the key angles in each pair of adjacent frames of the test video are compared in real time, each comparison result is marked with an identifier, and the identifiers are connected in series to form a result list.
For ease of calculation, the comparison result of the key angles may be marked with the numeral 0 or 1.
If the key angle in the later of two adjacent frames is smaller than the key angle in the earlier frame, the comparison result is marked 1, indicating the body ascending stage; conversely, if the key angle in the later frame is larger, the comparison result is marked 0, indicating the body descending stage.
As shown in fig. 4, if the key angle A(t+1) in the (t+1)-th frame is smaller than the key angle A(t) in the t-th frame, i.e. A(t+1) < A(t), the comparison result is marked 1, indicating the body ascending stage; if A(t+1) > A(t), the comparison result is marked 0, indicating the body descending stage.
Therefore, assuming that the tester's sit-up form is standard, the detection environment is good, and the data are perfectly accurate, the sit-up action passes through one body ascending stage and one body descending stage, and a result list such as [11111111110000000000] is formed after marking.
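A minimal sketch of this marking step, assuming a list of per-frame key angles is already available; equal angles are treated as "not smaller" and marked 0, a boundary case the description leaves open:

```python
def build_result_list(angles):
    """Mark each adjacent frame pair: 1 if the key angle decreased
    (body ascending), otherwise 0 (body descending)."""
    return [1 if nxt < prev else 0 for prev, nxt in zip(angles, angles[1:])]

# Hypothetical per-frame key angles (degrees) for one idealized sit-up:
angles = [120, 110, 95, 80, 60, 50, 55, 70, 90, 105, 120]
print(build_result_list(angles))  # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```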
Step 140, identifying the body ascending stage and the body descending stage of the sit-up according to the changes of the identifiers in the result list.
For example, in the result list [11111111110000000000], the first 10 bits are all 1, the last 10 bits are all 0, and the value changes from 1 to 0 between the 10th and 11th bits; the motion up to the 10th bit is therefore the body ascending stage and the motion from the 11th bit onward is the body descending stage. Through the bit positions in the result list, each identifier can be mapped back to a specific frame of the test video.
Step 150, recognizing the sit-up action posture according to the alternation of the identified body ascending and body descending stages.
Since the test video is continuous, the result list is also continuous, for example [1111111111000000000011111111110000000000 ……], so the sit-up action posture can be recognized, and counted, throughout the test video according to the alternation of the body ascending and body descending identifiers, as sketched below.
For example, the continuous result list above would be counted as 2, indicating that two sit-ups were performed.
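A sketch of counting by alternation under these assumptions: the result list has already been smoothed, and one sit-up is counted per run of 1s immediately followed by a run of 0s:

```python
from itertools import groupby

def count_by_alternation(labels):
    """Collapse the result list into runs of identical identifiers and count
    one sit-up per ascending run (1s) followed by a descending run (0s)."""
    runs = [value for value, _ in groupby(labels)]   # e.g. [1, 0, 1, 0]
    return sum(1 for a, b in zip(runs, runs[1:]) if (a, b) == (1, 0))

labels = [1] * 10 + [0] * 10 + [1] * 10 + [0] * 10
print(count_by_alternation(labels))  # 2
```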
In identifying the test video, factors such as video jitter and changes in ambient light may cause occasional anomalies, so the data may oscillate unexpectedly; for example, the generated result list might be [10101111110010010000]. The invention therefore further provides a method for eliminating such unexpected oscillation data, as follows:
traverse the result list with a sliding window of a small number of frames and uniformly replace all identifiers in the window with the majority identifier. The window size may be set according to the camera frame rate and other conditions and is usually at least five frames; the step of the sliding window is half the window size, rounded to an integer, so a five-frame window has a step of 3 frames.
As shown in fig. 5, in the first sliding window over the example result list there are three 1s and two 0s, so the two 0s are replaced with 1s; proceeding in this way converts the result list to [11111111110000000000], and the identification of the body ascending and body descending stages is then performed on the converted list.
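One way this correction could be implemented is sketched below. The description does not say whether a window's majority is computed from already-corrected values; this version reads each majority from the original list, which keeps the pass stable:

```python
def smooth_result_list(labels, window=5):
    """Traverse the result list with a sliding window and replace the
    window's contents with its majority identifier. The window covers at
    least five frames and the step is half the window, rounded up."""
    out = list(labels)
    step = (window + 1) // 2                       # window 5 -> step 3
    for start in range(0, max(len(labels) - window + 1, 1), step):
        chunk = labels[start:start + window]       # read from the original
        majority = 1 if 2 * sum(chunk) > len(chunk) else 0
        out[start:start + len(chunk)] = [majority] * len(chunk)
    return out

noisy = [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(smooth_result_list(noisy))  # nine 1s followed by eleven 0s
```

On the oscillating example from the text, the pass removes the isolated flips and leaves a single ascent-descent boundary, which is what the stage identification needs.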
In the scheme of the invention, a highest-point angle threshold and a lowest-point angle threshold are set for the highest and lowest points of the action, recorded as the first threshold and the second threshold respectively, and whether the body ascending and body descending stages are completed correctly is judged by comparing the key angle with these thresholds.
For example, in the body ascending stage, if the key angle becomes less than or equal to the first (highest-point) threshold, the ascending motion is completed; otherwise it is not completed, and if the body descending stage then begins, the sit-up posture is wrong: no correct count is made and the error count is incremented by 1.
In the body descending stage, if the key angle becomes greater than or equal to the second (lowest-point) threshold, the descending motion is completed; otherwise it is not completed, and if the body ascending stage then begins, the sit-up posture is wrong: no correct count is made and the error count is incremented by 1.
One consecutive body ascending stage followed by one body descending stage is taken as one counting cycle, and the correct count is incremented by 1 after one consecutive completed ascending motion and completed descending motion.
The first and second thresholds are usually set to 50 degrees and 120 degrees, but in the scheme of the invention they can also be modified manually to suit different individuals.
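The counting rules above can be read as a small state machine. The sketch below is one plausible implementation, assuming per-frame smoothed phase identifiers and key angles; the 50/120-degree defaults are the values named in the text, the class name is illustrative, and a cycle still open when the video ends is simply not closed:

```python
class SitupCounter:
    """One correct count per consecutive completed ascent (key angle <=
    first threshold) and completed descent (key angle >= second threshold);
    any cycle missing a completion flag increments the error count."""

    def __init__(self, first_threshold=50.0, second_threshold=120.0):
        self.first = first_threshold      # highest-point angle threshold
        self.second = second_threshold    # starting-posture angle threshold
        self.correct = 0
        self.errors = 0
        self._phase = 1                   # assume the test opens with an ascent
        self._up_ok = False
        self._down_ok = False

    def feed(self, phase, angle):
        """phase: 1 = ascending, 0 = descending (from the smoothed list)."""
        if phase != self._phase and phase == 1:
            # A descent just ended: close the counting cycle.
            if self._up_ok and self._down_ok:
                self.correct += 1
            else:
                self.errors += 1
            self._up_ok = self._down_ok = False
        self._phase = phase
        if phase == 1 and angle <= self.first:
            self._up_ok = True            # first completion flag
        elif phase == 0 and angle >= self.second:
            self._down_ok = True          # second completion flag

counter = SitupCounter()
for phase, angle in [(1, 80), (1, 45), (0, 90), (0, 125), (1, 70)]:
    counter.feed(phase, angle)
print(counter.correct, counter.errors)  # 1 0
```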
According to the scheme of the invention, to maximize the accuracy of sit-up posture recognition, the two important parameters, the first threshold and the second threshold, can also be adjusted automatically. The specific method is:
simulate in advance with models of the lengths and proportions of the height, upper arm, forearm, trunk, thigh and lower leg of the human body, and calculate the most reasonable corresponding first and second thresholds; these can be obtained by simulation through drawing, or through big-data modeling;
calculate the tester's height from the tester's skeleton recognition data obtained by the human pose estimation algorithm and the on-site distance between the image acquisition device and the tester;
and automatically match the most reasonable first and second thresholds according to the lengths and proportions of the tester's upper arm, forearm, trunk, thigh and lower leg obtained by the human pose estimation algorithm.
In this way the judgment of the completion status of the body ascending and body descending motions is more accurate, and the accuracy of action posture recognition in the scheme of the invention is far higher than in the prior art.
The scheme of the invention can also detect common errors, judged by combining the connections of the skeleton key points with the current motion stage; for example, in the body ascending stage, whether the posture is standard is judged by detecting the bending angle of the knees, whether the elbows touch the knees, whether the hands leave the shoulders, and so on, and the error count is incremented by 1 for a non-standard posture.
These detections are realized by identifying the corresponding included angles and checking whether the corresponding skeleton-point connection lines coincide; the algorithm is simpler than the detection described above and is not repeated here.
According to the method, when the tester's sit-up posture returns to the initial stage, a timestamp of the test video (the moment on the current video's time axis) is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the drop-down list jumps to the test video at the associated timestamp for playback.
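A minimal sketch of this association, with a hypothetical frame rate and function names that are not from the patent:

```python
count_timestamps = {}   # count number -> seconds on the video time axis

def on_initial_posture(count, frame_index, fps=30.0):
    """Record the video timestamp when the posture returns to the initial
    stage and associate it with the current count."""
    count_timestamps[count] = frame_index / fps

def playback_position(count):
    """Seek position (seconds) for a count clicked in the drop-down list."""
    return count_timestamps[count]

on_initial_posture(1, 450)     # count 1: initial posture at frame 450
print(playback_position(1))    # 15.0 -> seek the video player here
```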
Embodiment 2
Embodiment 2 of the present invention provides a system for intelligently identifying the sit-up action posture completion status. As shown in fig. 6, the system comprises an image acquisition device 10 for acquiring the test video, an action posture recognition device 20, a display device 30 and a prompt device 40.
The action posture recognition device 20 is provided with the above algorithm for intelligently recognizing the sit-up action posture completion status. Specifically, the action posture recognition device 20 comprises:
the identification module 21 is configured to compare key angles in two adjacent frames of images in the test video, and identify a comparison result of the key angles in the two frames of images. After identification, the identifications are connected in series to form a result list; wherein, the key angle is the included angle formed by the shoulder, the elbow and the wrist.
A lifting stage identification module 22, configured to identify a body descending stage or a body ascending stage of the sit-up according to a change of the identifier in the result list; the movement posture for completing one sit-up is divided into two stages, namely a body descending stage and a body ascending stage.
And the motion gesture recognition module 23 is configured to recognize the motion gesture of the sit-up according to the alternation of the body descending stage or the body ascending stage obtained through recognition.
On the basis, the action gesture recognition device further comprises a correction module 24, which is used for traversing the result list by using a sliding window and replacing all identifiers in the sliding window with the identifiers with the largest number, wherein the sliding window at least comprises five frames.
On this basis, the motion gesture recognition apparatus further includes a counting module 25.
The counting module 25 determines whether the ascending motion of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, and generates an ascending completion flag when the ascending motion is completed; judging whether the descending action of the sit-up is finished or not by detecting whether the key angle is larger than a second threshold value or not, and generating a descending completion mark when the descending action is finished; counting is performed according to the rising completion flag and the falling completion flag.
On the basis, the sit-up exercise device further comprises a timestamp association module, when the sit-up exercise posture of the tester changes to be in the initial stage, the timestamp (the moment on the time axis of the current video) of the test video is recorded, the timestamp is associated with each count, the count generates a drop-down list, and the test video guided by the timestamp is jumped to for playback by clicking the count in the drop-down list.
During system deployment, to make the detection result more accurate and eliminate the influence of environmental factors, the invention further provides AR (augmented reality) auxiliary lines which, as shown in fig. 8, include:
a position auxiliary line 31, used to guide the tester to the recommended position for testing or training; it is a line displayed on the display device marking where equipment such as a sports carpet, yoga mat, training mat, sports mat or anti-slip sports mat should be placed, and aligning the side edge of the mat with the position auxiliary line improves recognition accuracy;
and a test area auxiliary line 32, a rectangular frame used to set the test area; skeleton key points are inferred and judged only inside the test area, which reduces the amount of computation; meanwhile, the key angles formed by connecting the skeleton key points can also be displayed, as auxiliary information against which the tester can correct the action posture.
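For illustration, a sketch of restricting the reasoning to the test area follows; the keypoint names, pixel coordinates and rectangle format are assumptions for the example:

```python
def in_test_area(point, area):
    """True if a keypoint (x, y) lies inside the rectangular test area
    (x0, y0, x1, y1), all in pixel coordinates."""
    x, y = point
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_keypoints(keypoints, area):
    """Keep only skeleton key points inside the test area, so the angle
    reasoning ignores bystanders outside the rectangle."""
    return {name: p for name, p in keypoints.items() if in_test_area(p, area)}

kps = {"shoulder": (400, 300), "waist": (450, 380), "knee": (520, 360),
       "bystander_head": (900, 120)}
print(sorted(filter_keypoints(kps, (200, 100, 700, 600))))
# ['knee', 'shoulder', 'waist']
```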
Combining the above embodiments, the system and method for intelligently identifying the sit-up action posture completion status provided by the invention have the following advantages over the prior art:
First, the body ascending and body descending stages are identified by comparing the key angle in consecutive frames of the video, and the completion of the sit-up posture is thereby recognized; no recognition model needs to be trained, the video frames can be used for direct recognition, the algorithm is simplified, and deployment is fast and convenient.
Second, the comparison results of the key angle are marked with the numerals 1 and 0; compared with directly comparing angle values as in the prior art, this simplifies the subsequent data processing algorithm and improves data processing efficiency.
Third, based on the 1/0 identifiers, unexpected oscillation data is eliminated by the sliding-window algorithm, improving recognition accuracy.
Fourth, the AR auxiliary lines both improve recognition accuracy and prevent extra computation caused by other people accidentally entering the video, improving efficiency.
Fifth, the whole system is highly convenient: it can operate without Internet access and is fast and easy to deploy.
Finally, it should also be noted that the terms "comprises," "comprising," or any other variation thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present invention is not limited to the above-mentioned preferred embodiments, and any structural changes made under the teaching of the present invention shall fall within the scope of the present invention, which is similar or similar to the technical solutions of the present invention.

Claims (9)

CN202110792620.3A (filed 2021-07-14, priority 2021-07-14) — System and method for intelligently identifying sit-up action posture completion condition — Active — granted as CN113255622B (en)

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202110792620.3A — 2021-07-14 — 2021-07-14 — System and method for intelligently identifying sit-up action posture completion condition (granted as CN113255622B (en))

Applications Claiming Priority (1)

Application Number — Priority Date — Filing Date — Title
CN202110792620.3A — 2021-07-14 — 2021-07-14 — System and method for intelligently identifying sit-up action posture completion condition (granted as CN113255622B (en))

Publications (2)

Publication Number — Publication Date
CN113255622A — 2021-08-13
CN113255622B (en) — 2021-09-21

Family

Family ID: 77191209

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
CN202110792620.3A (Active, granted as CN113255622B (en)) — System and method for intelligently identifying sit-up action posture completion condition — 2021-07-14 — 2021-07-14

Country Status (1)

Country — Link
CN — CN113255622B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number — Priority date — Publication date — Title
CN101964047A * — 2009-07-22 — 2011-02-02 — Multiple trace point-based human body action recognition method
US20200372288A1 * — 2014-08-11 — 2020-11-26 — Systems and methods for non-contact tracking and analysis of physical activity using imaging
CN105913045A * — 2016-05-09 — 2016-08-31 — Method and system for counting of sit-up test
CN108564596A * — 2018-03-01 — 2018-09-21 — A kind of the intelligence comparison analysis system and method for golf video
CN111104816A * — 2018-10-25 — 2020-05-05 — Target object posture recognition method and device and camera
CN111368810A * — 2020-05-26 — 2020-07-03 — Sit-up detection system and method based on human body and skeleton key point recognition
CN112870641A * — 2021-01-20 — 2021-06-01 — Sit-up exercise information management system based on Internet of things and detection method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
编程玩家俱乐部 (Programming Players Club): "Python玩人工智能：你的仰卧起坐达标了吗?" [Python plays AI: do your sit-ups meet the standard?], 《CSDN》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number — Priority date — Publication date — Assignee — Title
CN113569828A * — 2021-09-27 — 2021-10-29 — Human body posture recognition method, system, storage medium and equipment
JP7169718B1 — 2021-11-12 — 2022-11-11 — 株式会社エクサウィザーズ — Information processing method, device and program
JP2023072148A — 2021-11-12 — 2023-05-24 — 株式会社エクサウィザーズ — Information processing method, device and program
CN114255196A * — 2021-11-15 — 2022-03-29 — Display method and related equipment thereof
CN114708541A * — 2022-04-27 — 2022-07-05 — Physical fitness test method and device, computer equipment and storage medium
CN115590504A * — 2022-09-30 — 2023-01-13 — 科大讯飞股份有限公司 (CN) — Motion evaluation method and device, electronic equipment and storage medium
CN115590504B * — 2022-09-30 — 2025-05-20 — Motion evaluation method and device, electronic equipment and storage medium
CN115953834A * — 2022-12-16 — 2023-04-11 — Multi-head attention posture estimation method and detection system for sit-up
CN117197887A * — 2023-08-01 — 2023-12-08 — Sports item counting method based on deep learning network
CN118503858A * — 2024-07-11 — 2024-08-16 — A sit-up intelligent testing method and system

Also Published As

Publication number — Publication date
CN113255622B (en) — 2021-09-21

Similar Documents

Publication — Title
CN113255622B (en) — System and method for intelligently identifying sit-up action posture completion condition
CN113255623B (en) — A system and method for intelligently identifying the status of push-up posture completion
CN113762133A (en) — Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN112749684A (en) — Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN114596451B (en) — Body fitness testing method and device based on AI vision and storage medium
JP2020174910A (en) — Exercise support system
EP3786971A1 (en) — Advancement manager in a handheld user device
CN113262459B (en) — Method, apparatus and medium for determining motion standard of sport body-building mirror
CN112818800A (en) — Physical exercise evaluation method and system based on human skeleton point depth image
CN112827127A (en) — A sit-up training system for physical education
CN120001024A (en) — A kind of intelligent sports training teaching device for stadium
CN115068919B (en) — Examination method of horizontal bar project and implementation device thereof
CN112966370A (en) — Design method of human body lower limb muscle training system based on Kinect
CN115240247B (en) — A recognition method and system for motion and posture detection
CN116510271A (en) — A Taijiquan intelligent auxiliary training and evaluation system
CN114093032A (en) — A human action evaluation method based on action state information
CN114038054A (en) — Pull-up detection device and method
CN113255624A (en) — System and method for intelligently identifying completion condition of pull-up action gesture
CN111986260A (en) — Image processing method and device and terminal equipment
CN111012357A (en) — A device and method for detecting forward flexion of sitting body based on image recognition
CN115116125A (en) — Push-up examination method and implementation device thereof
CN115116126A (en) — Examination method of sit-ups and its realization device
CN117137434A (en) — Intelligent physique detection system based on multiple sensors and detection method thereof
CN115761873A (en) — Shoulder rehabilitation movement duration evaluation method based on gesture and posture comprehensive visual recognition
CN116343332A (en) — Intelligent table tennis training method and system thereof

Legal Events

Code — Event
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
TR01 — Transfer of patent right
Effective date of registration: 2022-09-06
Address after: Room 2310, 23rd Floor, No. 24, Jianguomenwai Street, Chaoyang District, Beijing 100010
Patentee after: One Body Technology Co.,Ltd.
Address before: Room zt1009, science and technology building, No. 45, Zhaitang street, Mentougou District, Beijing 102300 (cluster registration)
Patentee before: Beijing Yiti Technology Co.,Ltd.
PE01 — Entry into force of the registration of the contract for pledge of patent right
Denomination of invention: A System and Method of Intelligently Recognizing the Completion Status of Sit-up Movement Attitude
Effective date of registration: 2023-06-27
Granted publication date: 2021-09-21
Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee
Pledgor: One Body Technology Co.,Ltd.
Registration number: Y2023990000325
