CN114639160A - Method for defining human head action, posture and joint relation through visual recognition - Google Patents

Method for defining human head action, posture and joint relation through visual recognition

Info

Publication number
CN114639160A
Authority
CN
China
Prior art keywords
head
recognition
human body
executing
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210015299.2A
Other languages
Chinese (zh)
Inventor
林明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Technology Shenzhen Co ltd
Original Assignee
Smart Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2022-01-07
Filing date
2022-01-07
Publication date
2022-06-17
Application filed by Smart Technology Shenzhen Co ltd
Priority to CN202210015299.2A
Publication of CN114639160A
Legal status: Withdrawn


Abstract

The invention discloses a method for defining human head action, posture and joint relations through visual recognition, which comprises the following steps: S1, start the image acquisition equipment and begin acquiring depth image data in real time; S2, perform human body recognition on the collected image frame data; if recognition fails, repeat S1, and if it succeeds, execute S3; S3, extract the key point information of the human body; S4, judge whether the key points of the two eyes are symmetrical about the head point; if so, conclude that the human face is directly facing the image acquisition equipment and execute S6, otherwise execute S5; S5, consider the distance difference between each eye key point and the neck key point to obtain the head rotation direction and angle, then execute S6; S6, with the analysis result of the head rotation direction and angle obtained, deduce what movement the head is performing. The invention solves the problem that head actions are difficult to analyze further in conventional human body action recognition, and provides a simple and efficient method as recognition technology develops.

Description

Method for defining human head action, posture and joint relation through visual recognition
Technical Field
The invention relates to the technical field of visual recognition, in particular to a method for defining human head actions, postures and joint relations through visual recognition.
Background
In visual recognition, human body information can be recognized, but human body actions are difficult to define: for example, the postures and actions of the hands and feet, the relation of the feet to the shoulders, whether the arms are spread horizontally, or whether the feet are shoulder-width apart. Regarding the definition of a standard motion or gesture, there are individual differences in the images obtained from the camera: the angle and position at which the camera captures the image affect the image input, which in turn affects the recognition of the human body, the separation of the human body from the background, and the data used for bone recognition, so the accuracy of a human motion or gesture cannot be judged directly. For example, if a person's arms are spread horizontally but the camera is not level, the arms appear inclined in the captured image, and it cannot be determined from the image whether the arms are actually horizontal; for the same reason, many other human actions and gestures cannot be recognized directly. The recognition of specific human motions and gestures in visual recognition therefore involves many uncertainties and becomes unreliable.
Disclosure of Invention
The invention provides a method for defining human head action, posture and joint relation through visual recognition, which aims to solve the problems in the prior art.
To solve the above technical problem, an embodiment of the present invention provides the following solution:
a method for defining human head movements, postures and joint relationships by visual recognition, comprising the steps of:
s1, starting image acquisition equipment to start acquiring depth image data in real time;
s2, carrying out human body recognition on the collected image frame data, and if the recognition fails, repeatedly executing S1, and if the recognition succeeds, executing S3;
s3, extracting key point information of the human body;
s4, judging whether the key points of the two eyes are in a symmetrical relation with the head point, if so, obtaining a conclusion that the face of the human body is opposite to the image acquisition equipment, and executing S6, otherwise, executing S5;
s5, comprehensively considering the distance difference between the eye key point and the neck key point to obtain the head rotation direction and the head rotation angle, and then executing S6;
s6, having the analysis results of the head rotation direction and angle, the user can further consider the complexity of the head movement to deduce what movement the head is performing.
Preferably, the complex head-motion cases in S6 include raising the head and rotating the head, which are judged by comparing the angle formed by the eyes, the neck and the head with the standard angle when the face is directly facing the lens.
Preferably, in S2, the human body is divided into 20 key points, each key point comprising coordinate values in the three directions X, Y and Z, with the origin of coordinates at the image-capturing lens.
Preferably, the eye skeleton points in S4 and S5 are referenced to cranial points.
Preferably, the 20 key points divided in S3 include: nose 0, neck 1, middle hip 2, right shoulder 3, left shoulder 4, right elbow 5, left elbow 6, right wrist 7, left wrist 8, right hip 9, left hip 10, right knee 11, left knee 12, right ankle 13, left ankle 14, right eye 15, left eye 16, right ear 17, left ear 18, right tiptoe 19, left tiptoe 20.
Preferably, the image capturing device in S1 includes an intelligent camera and a fill-light device.
Preferably, the image capturing device in S1 is connected to a memory.
The scheme of the invention at least comprises the following beneficial effects:
the invention solves the problem that the head action is difficult to further analyze in the conventional human body action recognition, and provides a simple and efficient method for analyzing the head action while the recognition technology is developed. The human body posture recognition technology based on the depth map is utilized, and the real-time data collected by the head depth camera is subjected to human body detection and bone key point information extraction, so that the current head action of a user is sensed, the data analysis accuracy is increased, and the human-computer interaction experience is improved. In visual recognition, the method utilizes depth information to separate human body from background information to recognize human body skeleton information, and realizes the recognition of standard motion of the head of the human body, the recognition of human body posture and the position relation recognition of each joint of the head of the human body according to the human body skeleton information with the depth information.
Drawings
FIG. 1 is a block flow diagram of the system of the present invention;
FIG. 2 is the first key point diagram of the human skeleton according to the present invention;
FIG. 3 is the second key point diagram of the human skeleton according to the present invention;
FIG. 4 is the third key point diagram of the human skeleton according to the present invention;
FIG. 5 is the fourth key point diagram of the human skeleton according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in FIGS. 1 to 5, the present embodiment provides a method for defining human head motion, posture and the relation of each joint by visual recognition, comprising the steps of:
s1, starting image acquisition equipment to start acquiring depth image data in real time;
s2, carrying out human body recognition on the collected image frame data, and if the recognition fails, repeatedly executing S1, and if the recognition succeeds, executing S3;
s3, extracting key point information of the human body;
s4, judging whether the key points of the two eyes are symmetrical to the head point, if so, obtaining a conclusion that the face of the human body is opposite to the image acquisition equipment, and executing S6, otherwise, executing S5;
s5, comprehensively considering the distance difference between the eye key point and the neck key point to obtain the head rotation direction and the head rotation angle, and then executing S6;
s6, having the analysis results of the head rotation direction and angle, the user can further consider the complexity of the head movement to deduce what movement the head is performing.
Wherein the complex head-motion cases in S6 include raising the head and rotating the head, which are judged by comparing the angle formed by the eyes, the neck and the head with the standard angle when the face is directly facing the lens.
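As an illustration of the angle comparison just described, the sketch below computes the angle at the neck between the eye midpoint and the head point and checks it against a calibrated front-facing reference. The reference angle, the tolerance, and the choice of the neck as the vertex are illustrative assumptions; the invention only specifies that the formed angle is compared with the standard front-facing angle.

import math

def angle_deg(vertex, a, b):
    """Angle at `vertex` between the rays vertex->a and vertex->b, in degrees."""
    va = [ai - vi for ai, vi in zip(a, vertex)]
    vb = [bi - vi for bi, vi in zip(b, vertex)]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.hypot(*va) * math.hypot(*vb)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_head_raised(eye_mid, neck, head, front_angle_deg, tol_deg=5.0):
    """Flag a raised head when the eyes-neck-head angle deviates from the
    calibrated front-facing reference by more than the tolerance."""
    return abs(angle_deg(neck, eye_mid, head) - front_angle_deg) > tol_deg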
In S2, the human body is divided into 20 key points, each key point comprising coordinate values in the three directions X, Y and Z, with the origin of coordinates at the image-capturing lens.
Wherein the eye skeleton points in S4 and S5 are referenced to cranial points.
Wherein the 20 key points divided in S3 include: nose 0, neck 1, middle hip 2, right shoulder 3, left shoulder 4, right elbow 5, left elbow 6, right wrist 7, left wrist 8, right hip 9, left hip 10, right knee 11, left knee 12, right ankle 13, left ankle 14, right eye 15, left eye 16, right ear 17, left ear 18, right tiptoe 19, left tiptoe 20.
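Written out as code, the numbering above maps directly to index constants; note that the enumeration actually spans indices 0 through 20, i.e. 21 points. A small Python sketch:

from enum import IntEnum

class Keypoint(IntEnum):
    """Key point indices as enumerated in the description."""
    NOSE = 0
    NECK = 1
    MID_HIP = 2
    RIGHT_SHOULDER = 3
    LEFT_SHOULDER = 4
    RIGHT_ELBOW = 5
    LEFT_ELBOW = 6
    RIGHT_WRIST = 7
    LEFT_WRIST = 8
    RIGHT_HIP = 9
    LEFT_HIP = 10
    RIGHT_KNEE = 11
    LEFT_KNEE = 12
    RIGHT_ANKLE = 13
    LEFT_ANKLE = 14
    RIGHT_EYE = 15
    LEFT_EYE = 16
    RIGHT_EAR = 17
    LEFT_EAR = 18
    RIGHT_TIPTOE = 19
    LEFT_TIPTOE = 20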
Wherein the image acquisition equipment in S1 includes an intelligent camera and a fill-light device.
Wherein the image acquisition device in S1 is connected to a memory.
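To make the S4 symmetry test and the S5 rotation estimate concrete, here is a sketch under stated assumptions: the symmetry tolerance, the arcsine formula, and the mapping from distance difference to turn direction are all illustrative choices, since the invention only states that the eye-to-neck distance difference is comprehensively considered.

import math

def eyes_symmetric_about_head(left_eye, right_eye, head, tol=0.02):
    """S4: both eyes at (nearly) equal distance from the head point is read
    as the face squarely facing the image acquisition equipment."""
    return abs(math.dist(left_eye, head) - math.dist(right_eye, head)) <= tol

def estimate_head_rotation(left_eye, right_eye, neck):
    """S5: the eye closer to the neck suggests the turn direction; the
    distance difference, normalized by the inter-eye span, gives a rough
    yaw angle (an illustrative formula, not one given by the invention)."""
    d_left = math.dist(left_eye, neck)
    d_right = math.dist(right_eye, neck)
    eye_span = math.dist(left_eye, right_eye)
    ratio = max(-1.0, min(1.0, (d_right - d_left) / eye_span))
    direction = "left" if d_left < d_right else "right"
    return direction, abs(math.degrees(math.asin(ratio)))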
The invention solves the problem that head actions are difficult to analyze further in conventional human body action recognition, and provides a simple and efficient method for analyzing head actions as recognition technology develops. Using a human posture recognition technology based on depth maps, human body detection and skeletal key point extraction are performed on real-time data collected by a depth camera, so that the user's current head action is perceived, the accuracy of data analysis is increased, and the human-computer interaction experience is improved. In visual recognition, the method uses depth information to separate the human body from the background and recognize human skeleton information; based on this skeleton information with depth, it realizes recognition of standard head motions, recognition of human posture, and recognition of the positional relation of each joint of the human head.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

CN202210015299.2A (priority date 2022-01-07, filing date 2022-01-07): Method for defining human head action, posture and joint relation through visual recognition. Withdrawn. CN114639160A (en).

Priority Applications (1)

Application Number: CN202210015299.2A
Priority Date: 2022-01-07
Filing Date: 2022-01-07
Title: Method for defining human head action, posture and joint relation through visual recognition
Publication: CN114639160A (en)

Applications Claiming Priority (1)

Application Number: CN202210015299.2A
Priority Date: 2022-01-07
Filing Date: 2022-01-07
Title: Method for defining human head action, posture and joint relation through visual recognition
Publication: CN114639160A (en)

Publications (1)

Publication Number: CN114639160A
Publication Date: 2022-06-17

Family

ID=81946627

Family Applications (1)

Application Number: CN202210015299.2A
Title: Method for defining human head action, posture and joint relation through visual recognition
Priority Date: 2022-01-07
Filing Date: 2022-01-07
Status: Withdrawn
Publication: CN114639160A (en)

Country Status (1)

Country: CN
Publication: CN114639160A (en)


Cited By (3)

* Cited by examiner, † Cited by third party

CN116269228A (en) *, priority 2023-03-20, published 2023-06-23, 西安维康健智能技术有限公司: Method, device, electronic equipment and storage medium for estimating cervical rotation
CN116269228B (en) *, priority 2023-03-20, published 2025-06-17, 西安维康健智能技术有限公司: Method, device, electronic equipment and storage medium for estimating cervical rotation
CN116363756A (en) *, priority 2023-03-31, published 2023-06-30, 北京卡路里信息技术有限公司: Method and device for identifying action orientation

Similar Documents

CN114067358B: Human body posture recognition method and system based on key point detection technology
CN106250867B: A kind of implementation method of the skeleton tracking system based on depth data
US9898651B2: Upper-body skeleton extraction from depth maps
JP4692526B2: Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method
CN111753747B: Violent motion detection method based on monocular camera and three-dimensional attitude estimation
CN111857334B: Human hand gesture letter recognition method, device, computer equipment and storage medium
CN109344694B: A real-time recognition method of basic human actions based on 3D human skeleton
JP2022536354A: Behavior prediction method and device, gait recognition method and device, electronic device, and computer-readable storage medium
CN110738154A: Pedestrian falling detection method based on human body posture estimation
JP4936491B2: Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method
JP5598751B2: Motion recognition device
Vallabhaneni et al.: The analysis of the impact of yoga on healthcare and conventional strategies for human pose recognition
CN113920326A: A fall behavior recognition method based on human skeleton keypoint detection
CN111582158A: Tumbling detection method based on human body posture estimation
CN114639160A: Method for defining human head action, posture and joint relation through visual recognition
CN105989694A: Human body falling-down detection method based on three-axis acceleration sensor
CN118397692A: Human body action recognition system and method based on deep learning
CN114255508A: OpenPose-based student posture detection analysis and efficiency evaluation method
CN110334609B: Intelligent real-time somatosensory capturing method
CN114049683A: Auxiliary detection system, method and medium for post-healing rehabilitation based on three-dimensional human skeleton model
Yan et al.: Human-object interaction recognition using multitask neural network
CN115240247B: A recognition method and system for motion and posture detection
CN113342167B: Space interaction AR realization method and system based on multi-person visual angle positioning
CN113327267A: Action evaluation method based on monocular RGB video
CN101241546A: A method to compensate the distortion of gait binary map

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2022-06-17)
