





Technical Field
The invention belongs to the technical field of image recognition, and in particular relates to a first aid posture correction method based on spatial gesture and body posture recognition.
Background Art
With the gradual development of artificial intelligence, the "Internet + healthcare" model has also emerged. Internet healthcare is a new application of the Internet in the medical industry, and it includes medical teaching that uses the Internet as its carrier and technical means. Based on this idea, the present invention combines medical teaching with computer vision technology. Medical first aid is a basic part of medical teaching, and the teaching process must ensure that actions are performed to standard, including checking breathing, checking the pulse of a major artery (the carotid), chest compressions, and so on. Each first aid technique has a standard action and a standard way of judging it, and judging whether an action is standard by eye alone introduces large errors. An intelligent system based on computer vision technology can therefore greatly improve the accuracy with which actions are judged. The detection system can further be combined with AR technology and mounted on smart wearable devices such as AR glasses.
In the early stage of a sudden illness, when professional rescuers cannot arrive in time, simple first aid measures can greatly improve the patient's chance of survival. Many non-professionals should therefore master simple and effective first aid techniques in advance, such as checking breathing, checking the carotid pulse, and performing chest compressions. At the same time, the accuracy of these first aid actions must be taken seriously, and an efficient, intelligent detection system is the better choice for ensuring it.
In the prior art, the testing device for the Heimlich maneuver is a wearable device used for action correction in medical teaching. It collects parameters through a dummy mold and various measuring instruments. These instruments increase the weight of the wearable equipment and interfere with the trainee's movements during first aid practice, which reduces learning efficiency, and long-term use degrades the accuracy of the instruments' readings. A vision-based human work posture correction system does use machine vision technology, but it only detects the skeleton and body movements; it does not accurately detect hand movements and therefore cannot address problems in the details of first aid. For medical first aid, which demands strict attention to detail, a rough overall estimate cannot guarantee effectiveness; only by attending to every detail can the effectiveness of first aid be guaranteed without wasting time.
Summary of the Invention
In order to solve the problems of the prior art, the present invention proposes a first aid posture correction method based on spatial gesture and body posture recognition.
The present invention is realized through the following technical solution. The present invention proposes a first aid posture correction method based on spatial gesture and body posture recognition, the method comprising:
Step 1: acquire an image to be detected and convert it into an RGB image;
Step 2: perform face detection and select the breathing and carotid artery regions from the RGB image;
Step 3: perform hand gesture detection and fingertip detection on the RGB image;
Step 4: judge whether the fingertip is within the target region;
Step 5: run the detection timer, thereby completing the correction;
Step 2 and Step 3 may be executed in either order.
Further, the face detection is specifically: framing the whole face region in the RGB image, performing face alignment based on the features of each part of the face, detecting the key points of the face with a machine learning model, and obtaining the position information of the key points.
Further, the breathing region is selected as follows: the nostril positions are selected from the key point position information of the face, and an appropriately enlarged region around the nostril positions is taken as the breath detection region;
The carotid artery region is selected as follows: the face and carotid artery region of a dummy are measured to obtain the coordinates of each point in the world coordinate system, data analysis is used to relate the dummy's face data to its carotid artery data through a proportional relationship, and the carotid artery region in the RGB image is then estimated from this proportional relationship together with the key point position information of the face in the RGB image.
Further, the gesture detection is specifically: detecting and framing the hand region in the RGB image with a machine learning model, and matching the obtained gesture against the standard first aid gesture to determine whether it is the standard gesture.
Further, the fingertip detection is specifically: performing detailed detection on the hand region that has been confirmed as the standard gesture, and obtaining the fingertip coordinates by combining the information of the whole image.
Further, Step 4 is specifically: obtaining the position of the fingertip in the gesture and the position of the selected region, and analyzing the two to judge whether the gesture is located at the correct target region.
Further, the detection timing is specifically:
Step 1: start timing and judge whether the elapsed time exceeds 5 s; if it does, go to step 2; if not, continue timing;
Step 2: issue a voice prompt that 5 s have elapsed and continue timing, then judge whether the elapsed time exceeds 10 s; if it does, end the correction; if not, continue timing.
The present invention also proposes a first aid posture correction method based on spatial gesture and body posture recognition, the method comprising:
Step 1: acquire an image to be detected and convert it into an RGB image;
Step 2: detect the chest compression region in the RGB image;
Step 3: detect the compression gesture within the detected compression region;
Step 4: identify the rescuer from the gesture detection result and detect the rescuer's body posture to determine whether it conforms to the standard posture; if not, correct and adjust it.
Further, Step 2 is specifically: identifying the compression region from the known shape and color information of a certain region in the RGB image.
Further, in Steps 3 and 4, the gesture within the compression region is detected and compared with the template of the standard action to determine whether the gesture is correct; skeleton analysis is used to judge whether the gesture is close to the dummy, thereby distinguishing the dummy from the rescuer; skeleton analysis of the posture is then performed on the rescuer's images from two viewpoints, the upper limb information is extracted, the angles of the relevant parameters are calculated, and the angle information is analyzed by a machine learning model to determine whether the posture conforms to the standard posture.
In the gesture detection part, the present invention uses an object detection network that attends to each feature in the image, giving higher precision. Although existing patented technology detects and analyzes human body posture, it can only analyze at a macroscopic level whether the overall body posture is reasonable. The present invention not only performs data acquisition and analysis of the body as a whole, but also makes fine judgments on gestures through close-range observation with AR glasses. The present invention can be trained on a large amount of data from different first aid scenarios to enhance its recognition ability.
Description of Drawings
Figure 1 is a flow chart of carotid artery pressing;
Figure 2 is a flow chart of chest compression;
Figure 3 is a schematic diagram of the chest compression region on the dummy;
Figure 4 is a schematic front view and top view of the standard compression gesture;
Figure 5 is a front view of the rescuer's standard upper limb posture;
Figure 6 is a side view of the rescuer's standard upper limb posture.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
With reference to Figures 1-6, the present invention proposes a first aid posture correction method based on spatial gesture and body posture recognition, the method comprising:
Step 1: acquire an image to be detected and convert it into an RGB image;
Step 2: perform face detection and select the breathing and carotid artery regions from the RGB image;
Step 3: perform hand gesture detection and fingertip detection on the RGB image;
Step 4: judge whether the fingertip is within the target region;
Step 5: run the detection timer, thereby completing the correction;
Step 2 and Step 3 may be executed in either order; the detection stages in the flow can swap positions, so the hand may be detected first and the region afterwards before the comparison is made. The procedure for checking breathing and the carotid pulse is not limited to these steps and this combination. Key points may also be added to realize other functions, provided that the key point positions do not split the detection content of Step 2 from that of Step 3.
The face detection is specifically: framing the whole face region in the RGB image, performing face alignment based on the features of each part of the face, detecting the key points of the face with a machine learning model, and obtaining the position information of the key points.
The breathing (nasal airflow) region is selected as follows: the nostril positions are selected from the key point position information of the face, and an appropriately enlarged region around the nostril positions is taken as the breath detection region;
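As a minimal sketch of how this selection might be implemented, assuming MediaPipe Face Mesh as the landmark model and treating the listed landmark indices as an illustrative approximation of the nostril area (neither the library nor the indices is prescribed by the invention):

```python
import mediapipe as mp

# Illustrative landmark indices near the nose/nostrils; the indices actually
# used would be chosen from the face-mesh topology (assumption, not measured).
NOSTRIL_LANDMARKS = [1, 2, 98, 327]

def breathing_region(rgb_image, expand=1.8):
    """Return an enlarged bounding box (x, y, w, h) around the nostril landmarks."""
    h, w, _ = rgb_image.shape
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        result = face_mesh.process(rgb_image)
    if not result.multi_face_landmarks:
        return None  # no face found in the frame
    lms = result.multi_face_landmarks[0].landmark
    xs = [lms[i].x * w for i in NOSTRIL_LANDMARKS]
    ys = [lms[i].y * h for i in NOSTRIL_LANDMARKS]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    half_w = (max(xs) - min(xs)) * expand / 2
    half_h = (max(ys) - min(ys)) * expand / 2
    return (int(cx - half_w), int(cy - half_h), int(2 * half_w), int(2 * half_h))
```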
The carotid artery region is selected as follows: the face and carotid artery region of a dummy are measured to obtain the coordinates of each point in the world coordinate system, data analysis is used to relate the dummy's face data to its carotid artery data through a proportional relationship, and the carotid artery region in the RGB image is then estimated from this proportional relationship together with the key point position information of the face in the RGB image.
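The proportional mapping can be sketched as follows, assuming the ratios were obtained offline by measuring the dummy; the numeric values below are placeholders for illustration, not measured data:

```python
# Placeholder proportional relationship measured offline on the dummy: the
# carotid region's offset and size expressed as fractions of the detected
# face bounding box (illustrative values only).
OFFSET_X, OFFSET_Y = 0.30, 1.10   # top-left offset relative to the face box
SIZE_W, SIZE_H = 0.25, 0.20       # region size relative to the face box

def carotid_region(face_box):
    """Estimate the carotid region (x, y, w, h) in image coordinates
    from the detected face bounding box (x, y, w, h)."""
    x, y, w, h = face_box
    return (int(x + OFFSET_X * w), int(y + OFFSET_Y * h),
            int(SIZE_W * w), int(SIZE_H * h))

# Example: a 200x220-pixel face at (340, 160) maps to a region below the jaw.
print(carotid_region((340, 160, 200, 220)))  # (400, 402, 50, 44)
```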
The gesture detection is specifically: detecting and framing the hand region in the RGB image with a machine learning model, and matching the obtained gesture against the standard first aid gesture to determine whether it is the standard gesture.
The fingertip detection is specifically: performing detailed detection on the hand region that has been confirmed as the standard gesture, and obtaining the fingertip coordinates by combining the information of the whole image.
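A minimal sketch of the hand and fingertip detection, assuming MediaPipe Hands as the hand model (an assumption for illustration; any detector that yields fingertip landmarks would serve the same role):

```python
import mediapipe as mp

def index_fingertips(rgb_image):
    """Return the pixel coordinates of the index fingertip for each detected hand."""
    h, w, _ = rgb_image.shape
    mp_hands = mp.solutions.hands
    tips = []
    with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        result = hands.process(rgb_image)
    if result.multi_hand_landmarks:
        for hand in result.multi_hand_landmarks:
            tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            tips.append((int(tip.x * w), int(tip.y * h)))
    return tips
```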
Step 4 is specifically: obtaining the position of the fingertip in the gesture and the position of the selected region, and analyzing the two to judge whether the gesture is located at the correct target region.
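The comparison itself reduces to a point-in-rectangle test; a minimal sketch, assuming regions are given as (x, y, w, h) boxes as in the sketches above:

```python
def fingertip_in_region(tip, region):
    """True if the fingertip (px, py) lies inside the target region (x, y, w, h)."""
    px, py = tip
    x, y, w, h = region
    return x <= px <= x + w and y <= py <= y + h

# Example: a fingertip at (415, 390) inside a region starting at (400, 360).
print(fingertip_in_region((415, 390), (400, 360, 50, 45)))  # True
```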
The detection timing is specifically:
Step 1: start timing and judge whether the elapsed time exceeds 5 s; if it does, go to step 2; if not, continue timing;
Step 2: issue a voice prompt that 5 s have elapsed and continue timing, then judge whether the elapsed time exceeds 10 s; if it does, end the correction; if not, continue timing.
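A minimal sketch of this two-stage timer, with the voice prompt abstracted as a callback (the callback is an assumption for illustration; the invention does not specify the audio interface):

```python
import time

def run_detection_timer(prompt_5s, check_interval=0.1):
    """Two-stage timer: call prompt_5s once after 5 s, end the correction after 10 s."""
    start = time.monotonic()
    prompted = False
    while True:
        elapsed = time.monotonic() - start
        if not prompted and elapsed > 5.0:
            prompt_5s()          # e.g. trigger the voice prompt on the AR glasses
            prompted = True
        if elapsed > 10.0:
            return               # correction finished
        time.sleep(check_interval)

# Example usage with a stand-in for the voice prompt:
# run_detection_timer(lambda: print("5 seconds elapsed"))
```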
The face processing uses basic algorithms such as face detection and face alignment. Face detection locates the position of the face in the image. In this scheme, a contour detection method is applied to the image first, based on the roughly elliptical shape of a human face. Since the scheme currently targets the face of a dummy, which is invariant, a fixed template can be prepared in advance and the face region can be located more accurately by template matching. After the face region is found, face alignment is used to compare the contours of the facial features and determine their positions, and the other feature points of the face are then derived from the determined positions. Gesture detection uses a machine learning method, employing the base network of an object detection network together with auxiliary convolutional layers and prediction convolutional layers. The convolutional layers convert the original image into feature maps, and multiple convolutional layers process the image at multiple levels.
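A minimal sketch of the template-matching step with OpenCV, assuming a fixed grayscale template of the dummy's face has been prepared in advance (the file names and the matching threshold are placeholders, not values from the invention):

```python
import cv2

def locate_dummy_face(gray_image, template, threshold=0.7):
    """Locate the dummy's face by normalized cross-correlation template matching.
    Returns the matched box (x, y, w, h), or None if the best match is too weak."""
    result = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template.shape[:2]
    return (max_loc[0], max_loc[1], w, h)

# Example usage (placeholder file names):
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# templ = cv2.imread("dummy_face_template.png", cv2.IMREAD_GRAYSCALE)
# print(locate_dummy_face(frame, templ))
```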
The present invention also proposes a first aid posture correction method based on spatial gesture and body posture recognition, the method comprising:
Step 1: acquire an image to be detected and convert it into an RGB image;
Step 2: detect the chest compression region in the RGB image;
Step 3: detect the compression gesture within the detected compression region;
Step 4: identify the rescuer from the gesture detection result and detect the rescuer's body posture to determine whether it conforms to the standard posture; if not, correct and adjust it.
Step 2 is specifically: identifying the compression region from the known shape and color information of a certain region in the RGB image.
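A minimal sketch of this step, assuming the compression marker on the dummy is a solid-colored patch of roughly known proportions; the HSV range, minimum area, and aspect-ratio bounds below are illustrative assumptions:

```python
import cv2
import numpy as np

# Illustrative color and shape cues for the compression marker (assumptions).
LOWER_HSV = np.array([40, 80, 80])    # assumed green marker, lower HSV bound
UPPER_HSV = np.array([80, 255, 255])  # upper HSV bound
MIN_AREA = 500                        # ignore small noise blobs

def detect_compression_region(bgr_image):
    """Return the bounding box (x, y, w, h) of the compression marker, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
    if not candidates:
        return None
    x, y, w, h = cv2.boundingRect(max(candidates, key=cv2.contourArea))
    # Crude shape cue: assume the marker is roughly as wide as it is tall.
    if not 0.5 <= w / float(h) <= 2.0:
        return None
    return (x, y, w, h)
```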
In Steps 3 and 4, the gesture within the compression region is detected and compared with the template of the standard action to determine whether the gesture is correct; skeleton analysis is used to judge whether the gesture is close to the dummy, thereby distinguishing the dummy from the rescuer; skeleton analysis of the posture is then performed on the rescuer's images from two viewpoints, the upper limb information is extracted, the angles of the relevant parameters are calculated with a mathematical model, and the angle information is analyzed by a machine learning model to determine whether the posture conforms to the standard posture.
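The angle-calculation part can be sketched as follows, assuming MediaPipe Pose supplies the skeleton key points for a single frame (an assumption; any skeleton model with shoulder, elbow, and wrist points would do, and the downstream machine learning analysis is not shown):

```python
import numpy as np
import mediapipe as mp

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each given as (x, y)."""
    v1 = np.array(a, dtype=float) - np.array(b, dtype=float)
    v2 = np.array(c, dtype=float) - np.array(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def left_elbow_angle(rgb_image):
    """Return the rescuer's left elbow angle from one RGB frame, or None."""
    mp_pose = mp.solutions.pose
    with mp_pose.Pose(static_image_mode=True) as pose:
        result = pose.process(rgb_image)
    if not result.pose_landmarks:
        return None
    lm = result.pose_landmarks.landmark
    pts = [(lm[i].x, lm[i].y) for i in (mp_pose.PoseLandmark.LEFT_SHOULDER,
                                        mp_pose.PoseLandmark.LEFT_ELBOW,
                                        mp_pose.PoseLandmark.LEFT_WRIST)]
    return joint_angle(*pts)
```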
The human skeleton is a fixed component of the human body and is largely consistent across people; using it as the reference for judging body posture reduces the error caused by uncertainty. The skeleton has joint points that can serve as key points for detection. Multiple people may appear in the images of this scheme, so a simple analysis of their positions is performed first to determine which is the dummy and which is the rescuer, and skeleton analysis is then applied to the rescuer.
Embodiment
The specific embodiment is illustrated with chest compressions. The dummy carries a chest compression region, the rescuer wears AR glasses, and two fixed cameras are placed in the scene to capture the frontal and side images during first aid.
The camera on the AR glasses captures the image before first aid begins. At this moment, simple target detection is performed on the image and the compression region is detected by its shape and color. When that region can no longer be recognized, gesture detection is started: the gesture within the chest compression range is analyzed and compared with the standard gesture to determine whether the gesture action is standard. If the gesture does not meet the standard, the AR glasses remind the rescuer to change the gesture and display the correct gesture at the same time.
The image information from the two fixed cameras is acquired, each part of the image is classified, and the dummy is distinguished from the rescuer. Skeleton analysis is performed on the image to find the upper limb region and the hand region. The hand region is examined in detail to judge the distance between the hand and the dummy, and the rescuer is alerted when the distance is too large.
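A minimal sketch of the distance check, assuming the hand position and the compression-region box come from the sketches above and that the pixel threshold is a placeholder that would be calibrated per camera setup:

```python
import math

ALERT_DISTANCE_PX = 60  # placeholder threshold, calibrated per camera setup

def hand_too_far(hand_xy, region):
    """True if the hand is farther than the threshold from the compression-region center."""
    x, y, w, h = region
    cx, cy = x + w / 2, y + h / 2
    dist = math.hypot(hand_xy[0] - cx, hand_xy[1] - cy)
    return dist > ALERT_DISTANCE_PX

# Example: a hand at (520, 300) has drifted away from a region at (400, 360, 50, 45),
# so the check returns True and an alert would be raised on the AR glasses.
print(hand_too_far((520, 300), (400, 360, 50, 45)))  # True
```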
The upper limbs in the image are labeled and the left and right sides are distinguished. The angle information of each upper limb is obtained separately, and whether the action is standard is judged by comparing it with the angles of the standard action. In the standard action, the upper limbs should not be bent and, in the side view, should be perpendicular to the horizontal plane on which the dummy lies.
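These two criteria can be sketched as simple angle tests on the side-view key points; the tolerances are illustrative assumptions, not values specified by the invention:

```python
import numpy as np

STRAIGHT_TOL_DEG = 15   # allowed deviation of the elbow angle from 180 degrees
VERTICAL_TOL_DEG = 10   # allowed tilt of the shoulder-wrist line from vertical

def arms_standard(shoulder, elbow, wrist):
    """Check, from side-view points (x, y), that the arm is straight and vertical."""
    def angle(a, b, c):
        v1, v2 = np.subtract(a, b), np.subtract(c, b)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    straight = abs(180.0 - angle(shoulder, elbow, wrist)) <= STRAIGHT_TOL_DEG

    arm_vec = np.subtract(wrist, shoulder)
    # Tilt of the arm from the vertical (y) axis of the image.
    tilt = np.degrees(np.arctan2(abs(arm_vec[0]), abs(arm_vec[1])))
    vertical = tilt <= VERTICAL_TOL_DEG
    return straight and vertical

# Example: a nearly straight, nearly vertical arm passes the check.
print(arms_standard((300, 200), (305, 300), (308, 400)))  # True
```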
The first aid posture correction method based on spatial gesture and body posture recognition proposed by the present invention has been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for a person of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.