CN116328276B - Gesture interaction method, system, device and medium based on body building device - Google Patents

Gesture interaction method, system, device and medium based on body building device

Info

Publication number
CN116328276B
CN116328276B
Authority
CN
China
Prior art keywords
interaction
area
gesture
interaction area
control instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111582175.4A
Other languages
Chinese (zh)
Other versions
CN116328276A (en)
Inventor
顾才朋
陈忠
汤琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Fit Future Technology Co Ltd
Original Assignee
Chengdu Fit Future Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Fit Future Technology Co Ltd
Priority to CN202111582175.4A
Publication of CN116328276A
Application granted
Publication of CN116328276B
Status: Active
Anticipated expiration

Abstract

The invention discloses a gesture interaction method, system, device and medium based on a fitness device. The fitness device captures images of the user within its acquisition area to obtain motion images; skeleton point information is generated from the motion images, and a coordinate system is established from the skeleton points. Several interaction areas are defined in the coordinate system, each preset with a control instruction. An interaction gesture is recognized from the skeleton point information; the method then judges whether the gesture lies within an interaction area and outputs the control instruction of the area containing it. The method offers high interaction efficiency, supports high-frequency operation, and enables instant interaction: the dwell time of the gesture is reduced, accuracy is effectively improved, and there is no noticeable waiting during interaction. The method is also low-cost, stable, broadly applicable, and easy to put into widespread use.

Description

Gesture interaction method, system, device and medium based on body building device
Technical Field
The invention relates to the field of interaction, and in particular to a gesture interaction method, system, device and medium based on a fitness device.
Background
Gesture recognition is a key technology of next-generation natural human-computer interaction. Compared with traditional input such as the mouse, keyboard and touch operation, it is natural, intuitive, easy to understand, simple to operate and pleasant to use, and it matches everyday human communication habits, so it has become a research hotspot for human-computer interaction schemes. Gestures are a natural and convenient language, well suited to human-computer interaction both emotionally and practically. The significance of gesture recognition research lies in bringing this natural, intuitive form of communication into the human-computer interface, so that the interface comes closer to human habits and interaction becomes more natural and convenient.
A smart fitness mirror places a display screen behind a semi-reflective mirror layer. When the screen is on, the user sees both the image on the screen and the reflection of the surrounding environment, so during exercise the user can follow a workout video on the screen while watching their own movements in the mirror, check whether their movements match those in the video, and thereby train more effectively. The mirror can also capture a real-time image signal of the acquisition area with a camera and send it to a data processing module, which compares it against a standard image signal and shows prompts on the screen according to the result. In this way the user enjoys relatively professional fitness guidance without leaving home, the training effect improves, and the user experience is enhanced.
A smart fitness mirror must frequently interact with its user: the user makes a movement or gesture and the mirror responds, which improves both the fun and the convenience of training. A conventional interaction method recognizes a fixed pose only after it has been held for several seconds, so its operation is not smooth and it cannot serve high-frequency interaction scenarios on a fitness device. Another existing method uses a 3D camera to judge the user's posture precisely and maps the body outline into the screen, allowing the user to operate elements at any position; this is highly flexible but complex and poor in stability.
Disclosure of Invention
The invention aims to guarantee instant interaction, support high-frequency operation, and improve recognition accuracy, efficiency and flexibility. To this end, the invention provides a gesture interaction method, system, device and medium based on a fitness device.
To achieve the above object, the invention provides a gesture interaction method based on a fitness device, comprising:
the fitness device captures images of the user in its acquisition area to obtain motion images of the user;
generating skeleton point information of the user from the motion images, and establishing a coordinate system from the skeleton point information;
establishing a plurality of interaction areas in the coordinate system, each interaction area being preset with a control instruction;
identifying an interaction gesture from the skeleton point information;
judging whether the interaction gesture lies within an interaction area, and outputting the control instruction corresponding to the interaction area in which it lies.
In use, a camera is mounted on the fitness device and the user stands in its acquisition area to exercise. The camera captures images of the user, yielding motion images from which the user's skeleton point information is generated. A coordinate system is established from the skeleton point information, and several interaction areas are created within it, each corresponding to a preset control instruction. When an interaction gesture is recognized from the skeleton points, the user has made the gesture; the method then judges whether the gesture lies within an interaction area and, if so, outputs the control instruction of that area, completing the interaction.
The interaction gesture of the invention is a preset, specific action: when the user performs it inside a given interaction area, the control instruction of that area is output. The preferred interaction gesture is bringing the two hands together in a clapping motion. A single lowered hand is a default posture for a normal person, so any one-handed trigger would require extra logic to prevent misjudgment; the inventors therefore regard clapping as an excellent instant-interaction action. On the user side, clapping is not a default posture, yet anyone can do it and it is very easy; it expresses the user's intent to operate clearly and is not confused with other natural movements. Technically, clapping is easy to detect: it suffices to determine whether the left-hand and right-hand skeleton points are close together. As a result, the method achieves higher interaction accuracy with no noticeable waiting during recognition, supports high-frequency operation, and effectively improves efficiency and flexibility.
In the invention, a coordinate system is established according to bone point information, and the method specifically comprises the following steps:
Generating skeleton point information of a user according to the action image, wherein the skeleton point information comprises a head top skeleton point and a pelvic skeleton point;
Obtaining a linear distance between a head top bone point and a pelvic bone point, and obtaining a unit length of a coordinate system according to the linear distance;
And taking a straight line between the head top bone point and the pelvic bone point as an ordinate and the pelvic bone point as an origin, and constructing a rectangular coordinate system according to the acquired unit length.
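The three construction steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: skeleton points are assumed to arrive as 2-D pixel coordinates from a pose estimator, and the unit length uses the one-sixth ratio the description prefers; all function and variable names are illustrative.

```python
import math

def build_coordinate_system(head_top, pelvis):
    """Build the body-relative coordinate system described above: origin at
    the pelvic skeleton point, y-axis along the pelvis-to-head line, unit
    length = one sixth of the head-pelvis straight-line distance.
    Returns (to_body, unit): a pixel->body mapping and the unit in pixels."""
    dx, dy = head_top[0] - pelvis[0], head_top[1] - pelvis[1]
    dist = math.hypot(dx, dy)          # straight-line head-pelvis distance
    unit = dist / 6.0                  # preferred unit length
    up = (dx / dist, dy / dist)        # unit vector pelvis -> head (ordinate)
    right = (up[1], -up[0])            # perpendicular abscissa direction

    def to_body(point):
        rx, ry = point[0] - pelvis[0], point[1] - pelvis[1]
        return ((rx * right[0] + ry * right[1]) / unit,
                (rx * up[0] + ry * up[1]) / unit)

    return to_body, unit
```

With head_top = (0, 60) and pelvis = (0, 0), the unit length is 10 pixels and the head-top point maps to (0, 6), i.e. six unit lengths above the pelvic origin.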
The distance between the head-top skeleton point and the pelvic skeleton point is chosen as the key reference because, across users of different body types, it is generally stable and therefore suitable for defining the unit length. The unit length fixed when the coordinate system is established governs the subsequent judgments: the distance between the hands, whether a point lies in an interaction area, and the size of each interaction area. In the invention the unit length is preferably one sixth of the straight-line distance between the head-top and pelvic skeleton points, which suits almost all users. After the coordinate system is determined, the interaction gesture is identified from the skeleton point information, specifically as follows:
acquiring skeleton point information, wherein the skeleton point information comprises left-hand skeleton points and right-hand skeleton points;
acquiring the distance between a left-hand bone point and a right-hand bone point;
comparing the distance between the left-hand and right-hand skeleton points with a threshold T: if the distance is less than or equal to T, an interaction gesture is identified; if it is greater than T, no interaction gesture is identified;
acquiring the coordinates of the midpoint between the left-hand and right-hand skeleton points;
judging whether the midpoint coordinates lie within an interaction area;
if the midpoint lies within one interaction area, that area enters the activated state and its corresponding control instruction is output; the other interaction areas remain inactive.
The threshold T is a preset length used to judge whether the hands are close. In practice the hands are not necessarily pressed fully together during a clap, so the invention tests closeness against T, which makes recognition more accurate. When the distance between the left-hand and right-hand skeleton points is at most T, the hands are judged to be together and a clap, i.e. the interaction gesture, has been made. The coordinates of the midpoint between the two hand skeleton points are then acquired and used to judge whether the gesture lies within some interaction area. The interaction areas are laid out in the coordinate system as needed; when the midpoint lies within one of them, the control instruction corresponding to that area is output.
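A minimal sketch of this clap test, assuming skeleton points are 2-D coordinates already expressed in the body coordinate system; the function name and the return convention (midpoint on success, None otherwise) are illustrative, not from the patent.

```python
def detect_interaction_gesture(left_hand, right_hand, threshold_t):
    """Recognize the clap gesture: if the left-hand and right-hand skeleton
    points are within threshold T of each other, return the midpoint used
    for the interaction-area test; otherwise return None."""
    dx = left_hand[0] - right_hand[0]
    dy = left_hand[1] - right_hand[1]
    if (dx * dx + dy * dy) ** 0.5 > threshold_t:
        return None                      # hands apart: no gesture
    return ((left_hand[0] + right_hand[0]) / 2.0,
            (left_hand[1] + right_hand[1]) / 2.0)
```

For example, hands at (1.0, 5.0) and (1.5, 5.0) with T = 1.5 yield the midpoint (1.25, 5.0), which is then tested against the interaction areas.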
The position and size of each interaction area in the coordinate system can be set according to actual needs: areas may sit within any of the four quadrants or span several quadrants, and their extents are specified by coordinate points, so no particular position or size is imposed. When there are three interaction areas, a first interaction area is established centered on the head-top skeleton point and preset with a first control instruction; a second and a third interaction area are established relative to the pelvic origin, preset with a second and a third control instruction respectively.
When there are five interaction areas, a first interaction area is established centered on the head-top skeleton point and preset with a first control instruction; a second and a third interaction area are established relative to the pelvic origin, preset with a second and a third control instruction; and a fourth and a fifth interaction area are established relative to the head-top skeleton point, preset with a fourth and a fifth control instruction.
A further refinement avoids misjudgment during continuous movement: after the user's gesture appears in one interaction area, the hand may pass through an adjacent area while moving away, which would wrongly trigger that area's control instruction and reduce the user's experience and fluency. Therefore, if the midpoint lies within one interaction area, that area enters the activated state and its area is enlarged while its control instruction is output; the other areas enter the inactive state and their areas are reduced. After the control instruction corresponding to the activated area has been output, all interaction areas are restored to their original sizes. In short, while one area is active the remaining areas are inactive; in the coordinate system the active area grows and the inactive areas shrink.
Because the active area grows and the inactive areas shrink, a gesture that strays into an inactive area during continuous movement does not cause misjudgment. For example, with a clap gesture and four interaction areas placed in the four quadrants of the coordinate system, a user who claps in one area could easily trigger an adjacent area while lowering the hands; the resizing prevents this.
Corresponding to the method, the invention also provides a gesture interaction system based on a fitness device, comprising:
The acquisition module is used for acquiring the user image in the area by the body-building device to obtain the action image of the user;
The first identification module is used for acquiring skeleton point information of a user according to the action image and establishing a coordinate system according to the skeleton point information;
The interaction module is used for establishing a plurality of interaction areas according to a coordinate system, and each interaction area is preset with a control instruction;
The second recognition module is used for recognizing the interaction gesture according to the skeleton point information;
the judging module is used for judging whether the interaction gesture is positioned in the interaction area according to the interaction gesture and outputting a judging result;
the execution module is used for executing a control instruction corresponding to the interaction area where the interaction gesture is located according to the judgment result;
and the adjusting module is used for adjusting the area of the interaction area according to the judging result.
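The modules above can be tied together in a small sketch. This is an illustrative composition under assumptions (2-D body-frame coordinates, axis-aligned rectangular areas); the class and method names are invented for the example and do not come from the patent.

```python
class GestureInteractionSystem:
    """Toy pipeline mirroring the recognition, judging and execution modules.
    regions: area name -> (xmin, xmax, ymin, ymax) box in unit lengths;
    commands: area name -> preset control instruction."""

    def __init__(self, regions, commands, threshold_t=1.5):
        self.regions = regions
        self.commands = commands
        self.threshold_t = threshold_t

    def step(self, left_hand, right_hand):
        """Return the control instruction triggered by this frame, or None."""
        dx = left_hand[0] - right_hand[0]
        dy = left_hand[1] - right_hand[1]
        if (dx * dx + dy * dy) ** 0.5 > self.threshold_t:
            return None                            # no clap this frame
        mx = (left_hand[0] + right_hand[0]) / 2.0  # gesture midpoint
        my = (left_hand[1] + right_hand[1]) / 2.0
        for name, (xmin, xmax, ymin, ymax) in self.regions.items():
            if xmin <= mx <= xmax and ymin <= my <= ymax:
                return self.commands[name]         # area activated
        return None
```

A clap whose midpoint lands inside a left-hand area would output that area's instruction; a frame with the hands apart outputs nothing.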
Corresponding to the method in the invention, the invention also provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the gesture interaction method based on the body-building device when executing the computer program.
Corresponding to the method in the present invention, the present invention also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the exercise device-based gesture interaction method described above.
The one or more technical schemes provided by the invention have at least the following technical effects or advantages:
The invention offers high interaction efficiency, supports high-frequency operation, and enables instant interaction: the dwell time of the interaction gesture is reduced, accuracy is effectively improved, and there is no noticeable waiting. The method is also low-cost, stable, broadly applicable, and easy to put into widespread use.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings:
FIG. 1 is a flow diagram of a gesture interaction method based on an exercise device;
FIG. 2 is a schematic diagram of the composition of a fitness device based gesture interaction system;
FIG. 3 is a schematic diagram of three interaction regions in a coordinate system in Example 1;
FIG. 4 is a schematic diagram of five interaction regions in a coordinate system in Example 2.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. In addition, the embodiments of the present invention and the features in the embodiments may be combined with each other without collision.
In the following description numerous specific details are set forth to provide a thorough understanding of the invention; however, the invention may be practiced otherwise than as described here, and its scope is therefore not limited to the specific embodiments disclosed below.
It will be appreciated by those skilled in the art that in the present disclosure, the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," etc. refer to an orientation or positional relationship based on that shown in the drawings, which is merely for convenience of description and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore the above terms should not be construed as limiting the present invention.
It will be understood that the terms "a" and "an" should be interpreted as referring to "at least one" or "one or more," i.e., in one embodiment, the number of elements may be one, while in another embodiment, the number of elements may be plural, and the term "a" should not be interpreted as limiting the number.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of a gesture interaction method based on an exercise device, and the invention provides a gesture interaction method based on an exercise device, which includes:
the exercise device collects images of the user in the area to obtain action images of the user;
Generating skeleton point information of a user according to the action image, wherein the skeleton point information comprises a head top skeleton point and a pelvic skeleton point;
Obtaining a linear distance between a head top bone point and a pelvic bone point, and obtaining a unit length of a coordinate system according to the linear distance;
And taking a straight line between the head top bone point and the pelvic bone point as an ordinate and the pelvic bone point as an origin, and constructing a rectangular coordinate system according to the acquired unit length.
Establishing a plurality of interaction areas according to a coordinate system, wherein each interaction area is preset with a control instruction;
acquiring skeleton point information, wherein the skeleton point information comprises left-hand skeleton points and right-hand skeleton points;
acquiring the distance between a left-hand bone point and a right-hand bone point;
Comparing the distance between the left-hand and right-hand skeleton points with a threshold T: if the distance is less than or equal to T, an interaction gesture is identified; if it is greater than T, no interaction gesture is identified.
Acquiring middle point coordinates between a left hand bone point and a right hand bone point;
judging whether the coordinates of the intermediate points are located in the interaction area or not;
If the coordinates of the midpoint are located in one interaction area, that area is in an activated state and its corresponding control instruction is output; the other interaction areas are inactive.
The gesture interaction method based on the fitness device is described below with reference to specific examples:
Step 1, a body-building device collects images of a user in an area to obtain action images of the user;
Step 2, generating skeleton point information of a user according to the action image, and establishing a coordinate system according to the skeleton point information;
Step 2.1: the skeleton point information comprises a head-top skeleton point, a pelvic skeleton point, a left-hand skeleton point and a right-hand skeleton point;
Step 2.2, obtaining a linear distance between a head top bone point and a pelvic bone point, and obtaining a unit length of a coordinate system according to the linear distance;
Step 2.21, obtaining a linear distance between the head bone point and the pelvic bone point, wherein one sixth of the linear distance is used as a unit length of a coordinate system, namely, the unit length 1 of the coordinate system is one sixth of the linear distance;
Step 2.3, taking a straight line between the head top bone point and the pelvic bone point as an ordinate and the pelvic bone point as an origin, and constructing a rectangular coordinate system according to the acquired unit length;
Step 3, establishing a plurality of interaction areas according to a coordinate system;
In this embodiment there are three interaction areas: a first, a second and a third interaction area, shown in fig. 3 as areas A, B and C respectively. The first interaction area is centered on the head-top skeleton point and extends 5 unit lengths up, 5 unit lengths down, 6 unit lengths left and 6 unit lengths right, so it is 12 unit lengths long and 10 unit lengths high. When the interaction gesture lies within the first interaction area, the control instruction preset for that area is output.
The control instruction of the second interaction area is "leftward" and that of the third is "rightward". The second interaction area extends vertically from 2 unit lengths above the pelvic skeleton point down to 10 unit lengths below it, and horizontally from 3 unit lengths to the left of the head-top skeleton point (its right edge) to 16 unit lengths to the left (its left edge); it is therefore 13 unit lengths long and 12 unit lengths high. The third interaction area mirrors it on the right: from 3 to 16 unit lengths to the right of the head-top skeleton point, with the same vertical extent, again 13 unit lengths long and 12 unit lengths high.
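Read off in unit lengths (origin at the pelvic point, head-top point at (0, 6) since the unit is one sixth of the head-pelvis distance), the three areas of this example can be written down as bounding boxes. The boxes are an interpretation of the prose above, assuming x grows to the user's right and y grows upward; the names are illustrative.

```python
HEAD_Y = 6  # head-top skeleton point sits 6 unit lengths above the pelvic origin

# (xmin, xmax, ymin, ymax) in unit lengths, per the description of fig. 3
REGIONS = {
    "A": (-6, 6, HEAD_Y - 5, HEAD_Y + 5),  # first area: 12 long x 10 high around the head
    "B": (-16, -3, -10, 2),                # second area ("leftward"): 13 long x 12 high
    "C": (3, 16, -10, 2),                  # third area ("rightward"): 13 long x 12 high
}

def region_size(box):
    """Return (length, height) of a bounding box in unit lengths."""
    xmin, xmax, ymin, ymax = box
    return xmax - xmin, ymax - ymin
```

`region_size(REGIONS["A"])` gives (12, 10) and the two side areas give (13, 12), matching the dimensions stated above.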
Step 4, identifying an interaction gesture according to the skeleton point information;
step 4.1, obtaining the distance between the left hand bone point and the right hand bone point;
Step 4.2, comparing the distance between the left-hand and right-hand skeleton points with the threshold T. In this embodiment T is 15 cm and the interaction gesture is a clapping action. If the distance is less than or equal to T, the user is judged to have clapped and the interaction gesture is recognized; if the distance is greater than T, the user has not clapped;
Step 4.3, the user makes an interaction gesture, and then the coordinates of the middle point between the left hand bone point and the right hand bone point are obtained;
step 5, judging whether the coordinates of the intermediate points are located in the interaction area;
Step 5.1, judging whether the coordinates of the intermediate points are located in the interaction area;
Step 5.11, if the coordinates of the intermediate point are located in one of the interaction areas, the interaction area is in an activated state, and a control instruction corresponding to the interaction area is output; the other interaction areas are in an inactive state;
Step 5.12, the control instruction corresponding to the interaction area is output; that area is in the activated state and the remaining areas are inactive. The area of the activated interaction area is increased and the areas of the inactive interaction areas are reduced.
In this embodiment the activated interaction area's length is increased by 1.5 unit lengths and its height by 0.5 unit lengths; specifically, it expands 0.75 unit lengths on each of the left and right sides and 0.25 unit lengths at each of the top and bottom. Each inactive interaction area's length is reduced by 2 unit lengths and its height by 1 unit length: it contracts 1 unit length on each side and 0.5 unit lengths at each end.
Before any control instruction is output the interaction areas keep their original sizes; after a control instruction is output the sizes are adjusted as above, and once the instruction has been executed every interaction area is restored to its original size.
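Step 5.12 and the restoration rule can be sketched together. This is a minimal illustration with hypothetical names; the per-side paddings are the concrete values of this embodiment (active: +0.75 per side, +0.25 per end; inactive: -1 per side, -0.5 per end).

```python
def resize_regions(regions, active_name):
    """Return adjusted copies of the (xmin, xmax, ymin, ymax) boxes after a
    control instruction fires: the activated area grows (length +1.5, height
    +0.5 unit lengths in total) and every inactive area shrinks (length -2,
    height -1). The input dict is left untouched, so the areas can simply be
    restored once the instruction has executed."""
    adjusted = {}
    for name, (xmin, xmax, ymin, ymax) in regions.items():
        if name == active_name:   # expand 0.75 per side, 0.25 per end
            adjusted[name] = (xmin - 0.75, xmax + 0.75, ymin - 0.25, ymax + 0.25)
        else:                     # contract 1 per side, 0.5 per end
            adjusted[name] = (xmin + 1.0, xmax - 1.0, ymin + 0.5, ymax - 0.5)
    return adjusted
```

Because the function returns a new dict, "restoring the areas" after execution is simply going back to the original regions dict.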
Example two
In this embodiment, the exercise device is an intelligent exercise mirror with a display screen. In use, the user stands in the acquisition area of the intelligent exercise mirror while the display screen plays exercise videos. The interaction process specifically comprises the following steps:
step 1, an intelligent body-building mirror collects images of a user in an area to obtain action images of the user;
Step 2, generating skeleton point information of a user according to the action image, and establishing a coordinate system according to the skeleton point information;
Step 2.1, the skeleton point information comprises a head-top skeleton point, a pelvic skeleton point, a left-hand skeleton point and a right-hand skeleton point;
Step 2.2, obtaining the linear distance between the head-top skeleton point and the pelvic skeleton point, and deriving the unit length of the coordinate system from this distance;
Step 2.21, one sixth of the linear distance between the head-top skeleton point and the pelvic skeleton point is taken as the unit length of the coordinate system, i.e., unit length 1 of the coordinate system equals one sixth of this linear distance;
Step 2.3, constructing a rectangular coordinate system with the obtained unit length, taking the straight line through the head-top skeleton point and the pelvic skeleton point as the ordinate axis and the pelvic skeleton point as the origin;
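The coordinate-system construction of steps 2.1-2.3 can be sketched as follows. This is a minimal illustration, not the patented implementation: skeleton points are assumed to be (x, y) pixel tuples, and the function names are hypothetical.

```python
import math

def build_coordinate_system(head_top, pelvis):
    """Return (origin, unit) for the skeleton coordinate system.

    Per steps 2.2-2.21, the unit length is one sixth of the straight-line
    distance between the head-top and pelvic skeleton points; per step 2.3
    the pelvic skeleton point is the origin.
    """
    unit = math.dist(head_top, pelvis) / 6.0
    return pelvis, unit

def to_skeleton_coords(point, origin, unit):
    # Convert an image point into unit-length coordinates relative to the
    # pelvis. Image y grows downward, so flip it to make "up" positive.
    return ((point[0] - origin[0]) / unit, (origin[1] - point[1]) / unit)
```

Because the unit length is derived from the user's own body, the interaction areas scale with the user's size and distance from the camera.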
Step 3, establishing a plurality of interaction areas according to a coordinate system;
In this embodiment there are five interaction areas in total: a first interaction area, a second interaction area, a third interaction area, a fourth interaction area and a fifth interaction area, as shown in fig. 4, where the first interaction area is area A in fig. 4, the second is area B, the third is area C, the fourth is area D and the fifth is area E.
The first interaction area is centered on the head-top skeleton point and extends 5 unit lengths longitudinally and 6 unit lengths transversely in each direction, i.e., its length is 12 unit lengths and its height is 10 unit lengths: from the head-top skeleton point, 5 unit lengths longitudinally upward form the upper side and 5 unit lengths longitudinally downward form the lower side; 6 unit lengths transversely to the left form the left side and 6 unit lengths transversely to the right form the right side. The preset control instruction of the first interaction area is determination A, i.e., when the interaction gesture is located in the first interaction area, the control instruction corresponding to it, the determination-A control instruction, is output.
The control instruction of the second interaction area is leftward and that of the third interaction area is rightward. The second interaction area takes the pelvic skeleton point as its vertical reference, with 2 unit lengths vertically upward as the upper side and 10 unit lengths vertically downward as the lower side, and the head-top skeleton point as its horizontal reference, with 3 unit lengths horizontally to the left as the right side and 16 unit lengths horizontally to the left as the left side, giving a length of 13 unit lengths and a height of 12 unit lengths. The third interaction area is the mirror image: 2 unit lengths vertically upward from the pelvic skeleton point as the upper side and 10 unit lengths vertically downward as the lower side, with 3 unit lengths horizontally to the right of the head-top skeleton point as the left side and 16 unit lengths horizontally to the right as the right side, again 13 unit lengths long and 12 unit lengths high.
The control instruction of the fourth interaction area is determination D and that of the fifth interaction area is determination E. The fourth interaction area takes the head-top skeleton point as its reference, with 4 unit lengths vertically upward as the upper side, 5 unit lengths vertically downward as the lower side, 7 unit lengths horizontally to the left as the right side and 16 unit lengths horizontally to the left as the left side; its length is 11 unit lengths and its height is 9 unit lengths.
The fifth interaction area is the mirror image: 4 unit lengths vertically upward from the head-top skeleton point as the upper side, 5 unit lengths vertically downward as the lower side, 7 unit lengths horizontally to the right as the left side and 16 unit lengths horizontally to the right as the right side; its length is 11 unit lengths and its height is 9 unit lengths.
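Under the coordinate system of step 2 (pelvis at the origin, head-top skeleton point six unit lengths up, y positive upward), the five areas and their instructions can be written down as plain axis-aligned rectangles. This is an illustrative sketch with hypothetical names, not the claimed implementation; where areas overlap, the first match wins here:

```python
HEAD_Y = 6  # head-top skeleton point sits 6 unit lengths above the pelvis origin

# instruction -> (xmin, ymin, xmax, ymax) in unit lengths
AREAS = {
    "determine A": (-6, HEAD_Y - 5, 6, HEAD_Y + 5),    # area A, centred on head top
    "leftward":    (-16, -10, -3, 2),                  # area B
    "rightward":   (3, -10, 16, 2),                    # area C
    "determine D": (-16, HEAD_Y - 5, -7, HEAD_Y + 4),  # area D
    "determine E": (7, HEAD_Y - 5, 16, HEAD_Y + 4),    # area E
}

def hit_area(point):
    """Return the control instruction whose area contains `point`, else None."""
    x, y = point
    for instruction, (x0, y0, x1, y1) in AREAS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return instruction
    return None
```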
Step 4, identifying an interaction gesture according to the skeleton point information;
step 4.1, obtaining the distance between the left hand bone point and the right hand bone point;
Step 4.2, comparing the distance between the left-hand and right-hand skeleton points with a threshold T: if the distance is smaller than or equal to the threshold T, the interaction gesture is identified. In this embodiment the threshold T is 15 cm; if the distance between the left-hand and right-hand skeleton points is smaller than or equal to T, it is judged that the user has made the interaction gesture, which in this embodiment is a clapping action. If the distance between the left-hand and right-hand skeleton points is greater than the threshold T, the user has not made a clapping action;
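The clap detection of steps 4.1-4.2 and the midpoint of step 4.3 reduce to a distance test plus an average. A minimal sketch, assuming hand positions are (x, y) coordinates in centimetres; the names are hypothetical:

```python
import math

CLAP_THRESHOLD_CM = 15.0  # threshold T of step 4.2 (15 cm in this embodiment)

def is_clap(left_hand_cm, right_hand_cm):
    """True when the hands are within threshold T of each other (step 4.2)."""
    return math.dist(left_hand_cm, right_hand_cm) <= CLAP_THRESHOLD_CM

def gesture_point(left_hand, right_hand):
    """Midpoint between the two hand skeleton points (step 4.3)."""
    return ((left_hand[0] + right_hand[0]) / 2,
            (left_hand[1] + right_hand[1]) / 2)
```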
Step 4.3, after the user makes the interaction gesture, the coordinates of the midpoint between the left-hand skeleton point and the right-hand skeleton point are obtained;
step 5, judging whether the coordinates of the intermediate points are located in the interaction area;
Step 5.1, judging whether the coordinates of the intermediate points are located in the interaction area;
Step 5.11, if the coordinates of the midpoint are located in one of the interaction areas, that interaction area enters the activated state and its corresponding control instruction is output, while the remaining interaction areas are in the inactive state;
step 5.12, after the control instruction corresponding to the activated interaction area is output, the area of the interaction area in the activated state is enlarged and the area of each interaction area in the inactive state is reduced.
In this embodiment, the length of the interaction area in the activated state is increased by 1.5 unit lengths and its height by 0.5 unit lengths: the length expands by 0.75 unit lengths toward each of the left and right sides, and the height expands by 0.25 unit lengths toward each of the upper and lower ends. The length of each interaction area in the inactive state is reduced by 2 unit lengths and its height by 1 unit length: the length shrinks by 1 unit length from each of the left and right sides, and the height by 0.5 unit lengths from each of the upper and lower ends.
Before the control instruction corresponding to an interaction area is output, the area of the interaction area is unchanged; after the control instruction is output, the area is adjusted as above; and after the control instruction has been executed, the area of the interaction area is restored to its original size.
After the user makes an interaction gesture in the first interaction area, the exercise mirror recognizes the gesture and outputs the control instruction corresponding to the first interaction area; the area of the first interaction area is then enlarged, i.e., the upper side moves 0.25 unit lengths longitudinally upward, the lower side 0.25 unit lengths longitudinally downward, the left side 0.75 unit lengths transversely to the left and the right side 0.75 unit lengths transversely to the right.
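The area adjustment described above (grow the activated area, shrink the inactive ones, restore after execution) can be sketched with this embodiment's numbers. A hypothetical illustration, not the claimed implementation; rectangles are (xmin, ymin, xmax, ymax) in unit lengths:

```python
def resize_areas(areas, active_key):
    """Return a new dict of rectangles after a control instruction fires.

    The activated rectangle grows by 0.75 unit lengths on each horizontal
    side and 0.25 on each vertical side (total +1.5 length, +0.5 height);
    every inactive rectangle shrinks by 1.0 horizontally and 0.5 vertically
    per side (total -2 length, -1 height). The input dict is left untouched
    so the original areas can be restored once the instruction has executed.
    """
    resized = {}
    for key, (x0, y0, x1, y1) in areas.items():
        if key == active_key:
            dx, dy = 0.75, 0.25   # grow the activated area
        else:
            dx, dy = -1.0, -0.5   # shrink the inactive areas
        resized[key] = (x0 - dx, y0 - dy, x1 + dx, y1 + dy)
    return resized
```

Enlarging the active area makes a repeated gesture at roughly the same spot more forgiving, while shrinking the inactive areas reduces accidental triggering of neighbouring instructions.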
Example III
Referring to fig. 2, fig. 2 is a schematic diagram of a gesture interaction system based on an exercise device. A third embodiment of the present invention provides a gesture interaction system based on an exercise device, where the system includes:
The acquisition module is used for acquiring the user image to obtain an action image of the user;
The first identification module is used for acquiring skeleton point information of a user according to the action image;
The interaction module is used for establishing a coordinate system according to the skeleton point information and establishing a plurality of interaction areas in the coordinate system;
the second recognition module is used for acquiring interaction attitude information according to the skeleton point information;
the judging module is used for judging whether the interaction gesture is positioned in the interaction area according to the interaction gesture information and outputting a judging result;
And the execution module is used for executing the control instruction corresponding to the interaction area according to the judgment result.
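One possible wiring of these modules into a single processing step is sketched below; all callables and names are hypothetical stand-ins for the acquisition, recognition, interaction, judging and execution modules above, not the patented system itself:

```python
class GestureInteractionSystem:
    def __init__(self, capture, skeleton, areas, gesture, executor):
        self.capture = capture    # acquisition module: () -> action image
        self.skeleton = skeleton  # first recognition module: image -> skeleton points
        self.areas = areas        # interaction module: points -> [(instruction, rect)]
        self.gesture = gesture    # second recognition module: points -> gesture point or None
        self.executor = executor  # execution module: instruction -> side effect

    def step(self):
        """Process one frame; return the executed instruction, or None."""
        image = self.capture()
        points = self.skeleton(image)
        regions = self.areas(points)
        point = self.gesture(points)
        if point is None:
            return None  # no interaction gesture in this frame
        # judging module: find the interaction area containing the gesture point
        for instruction, (x0, y0, x1, y1) in regions:
            if x0 <= point[0] <= x1 and y0 <= point[1] <= y1:
                self.executor(instruction)
                return instruction
        return None
```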
Example IV
A fourth embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the exercise device-based gesture interaction method when the computer program is executed.
The processor may be a central processing unit, or may be other general purpose processors, digital signal processors, application specific integrated circuits, off-the-shelf programmable gate arrays or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be used to store the computer program and/or the modules, and the processor implements the various functions of the exercise device-based gesture interaction device of the present invention by running the computer program and/or modules and invoking the data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Example five
A fifth embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the exercise device-based gesture interaction method.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

CN202111582175.4A, priority date 2021-12-22, filing date 2021-12-22: Gesture interaction method, system, device and medium based on body building device. Status: Active. Granted as CN116328276B (en).

Publications (2)

CN116328276A (en), published 2023-06-27
CN116328276B (en), granted 2024-08-13

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant