CN102081918A - Video image display control method and video image display device

Video image display control method and video image display device

Info

Publication number
CN102081918A
Authority
CN
China
Prior art keywords
palm
image
hand shape
gesture
control command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010612804
Other languages
Chinese (zh)
Other versions
CN102081918B (en)
Inventor
方伟
赵勇
袁誉乐
罗卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Rui Technology Co Ltd
Original Assignee
Peking University Shenzhen Graduate School
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School
Priority to CN 201010612804
Publication of CN102081918A
Application granted
Publication of CN102081918B
Expired - Fee Related
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a video image display control method and a video image display. The method collects the scene in front of the display device in real time, obtains a human body region image from the collected real-time scene image, performs gesture detection on the human body region image, determines the control command in the gesture database corresponding to the detected gesture, and finally outputs the control command; the video image display then controls the display of the video image on the display device according to the control command. This completes the active interaction between the user and the video image, allows users to select the information they need according to their interests, improves the interaction efficiency between users and advertising content, and at the same time brings users a new experience.

Description

Translated from Chinese
A video image display control method and video image display

Technical field

The invention relates to the fields of image processing and human-computer interaction, and in particular to a video image display control method and a video image display.

Background

In recent years, competition among advertising media of all kinds has been fierce, and digital signage, as a brand-new advertising medium, bears the brunt of it. As a product of the digital development trend of advertising media, digital signage is a digital media system that publishes various kinds of advertising information through terminal display devices. It supports dynamic delivery of advertising content, personalized and differentiated self-service, and the playback of advertising information to specific groups of people at specific places and times, and therefore achieves a good advertising effect. Its application potential in shopping malls, supermarkets, hotels, medical facilities, theaters and other public places where people gather is very large, and it has broad market prospects.

Current digital signage automatically plays advertising pictures or video and animation clips according to a preset playback scheme. When pedestrians pass by, they can only see the content currently displayed on the billboard and cannot watch the advertising content they are interested in as they wish. If they want to know the content of other advertisements that are not being displayed, they have to stop and wait for quite a long time. This is a passive and unpredictable way of receiving advertising content; people often cannot easily obtain the useful advertising content they want, and the effect of the advertising is therefore greatly reduced.

Summary of the invention

The main technical problem to be solved by the present invention is to provide a video image display control method and a video image display. The invention realizes active interaction between the user and the video image, allowing users to easily select the information they are interested in and thereby improving the efficiency of information interaction.

In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:

A video image display control method, comprising the steps of:

A. Collecting a real-time scene image in front of the display device;

B. Performing human body detection on the real-time scene image, and obtaining a human body region image;

C. Detecting a gesture in the human body region image;

D. Determining the control command corresponding to the gesture;

E. Controlling the display of the video image on the display device according to the control command.

Step B comprises: comparing the currently acquired real-time scene image frame with a reference image obtained from a background model, so as to detect the human body region image.

Further, the step of detecting the human body region image comprises:

performing a pixel-level subtraction between the currently acquired real-time scene image frame and the reference image obtained from the background model, to obtain a difference image;

binarizing the difference image to obtain a binarized difference image;

performing morphological processing on the binarized difference image;

performing connectivity processing on the parts of the binarized difference image that conform to a predetermined connectivity rule, to obtain connected regions;

judging whether each connected region is a noise region, and if so deleting it;

taking the region image composed of all the connected regions finally remaining as the human body region image, and outputting the human body region image.

Further, the above method also comprises the step of: judging whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model is kept unchanged, otherwise the background model is updated.

The gesture may include the hand shape of the palm, in which case step C comprises:

performing palm target detection on the human body region image, and obtaining a palm target region image;

extracting hand shape features from the palm target region image;

performing hand shape recognition according to the extracted hand shape features of the palm and a pre-established hand shape classifier, and judging whether the hand shape of the palm is a valid hand shape;

in step D, when the hand shape of the palm is judged to be a valid hand shape, determining the control command corresponding to the valid hand shape according to a pre-established gesture database; or

the gesture may include both the hand shape of the palm and the motion trajectory of the palm, in which case step C comprises:

performing palm target detection on the human body region image, and obtaining a palm target region image;

extracting hand shape features from the palm target region image;

performing hand shape recognition according to the extracted hand shape features of the palm and the pre-established hand shape classifier, judging whether the hand shape of the palm is a valid hand shape, and, when it is judged to be valid, marking the palm as the currently activated palm;

detecting the motion trajectory of the currently activated palm, and determining the motion type of the currently activated palm;

in step D, determining the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;

in step E, switching to the corresponding video image or operating on the currently displayed video image according to the control command.

Further, the step of performing palm target detection on the human body region image comprises:

performing skin color detection on the human body region image, and obtaining an image containing face, arm or palm regions;

obtaining the region image of the arm and/or palm according to a pre-established skin color detection model;

detecting the palm in the region image of the arm and/or palm.

Further, the step of detecting the palm in the region image of the arm and/or palm comprises:

judging whether the aspect ratio of the region image of the arm and/or palm is greater than 2; if so, judging the region to be an arm-and-palm region image, otherwise a palm region image;

when the region is judged to be an arm-and-palm region image, performing edge detection on the arm-and-palm region image, acquiring edge information, and obtaining the region contour;

fitting a minimum circumscribed ellipse to the region contour, and obtaining the information of the circumscribed ellipse;

obtaining the direction information of the region contour according to the information of the circumscribed ellipse, and thereby finally obtaining the orientation of the arm and palm;

performing image rectification on the arm region image and palm region image whose orientation has been obtained, so that the arm and palm point vertically upward;

performing palm localization detection on the rectified arm-and-palm region image, and obtaining the palm target region image.

Corresponding to the above method, the present invention also provides a video image display, comprising:

a camera device for collecting a real-time scene image in front of the display device;

a human body detection device for performing human body detection on the real-time scene image and obtaining a human body region image;

a gesture detection device for detecting a gesture in the human body region image;

a control command determination device for determining the control command corresponding to the gesture;

an image display control device for controlling the display of the video image on the display device according to the control command.

Further, the human body detection device is used to compare the currently acquired real-time scene image frame with the reference image obtained from the background model, so as to detect the human body region image.

The above video image display further comprises a background update device, which is used to judge whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model is kept unchanged, otherwise the background model is updated.

The gesture may include the hand shape of the palm, in which case the gesture detection device comprises:

a palm detection unit, configured to perform palm target detection on the human body region image and obtain a palm target region image;

a hand shape feature extraction unit, configured to extract hand shape features from the palm target region image;

a hand shape recognition unit, configured to perform hand shape recognition according to the extracted hand shape features of the palm and a pre-established hand shape classifier, and to judge whether the hand shape of the palm is a valid hand shape;

when the hand shape of the palm is judged to be a valid hand shape, the control command determination device determines the control command corresponding to the valid hand shape according to the pre-established gesture database; or

the gesture may include both the hand shape of the palm and the motion trajectory of the palm, in which case the gesture detection device comprises:

a palm detection unit, configured to perform palm target detection on the human body region image and obtain a palm target region image;

a hand shape feature extraction unit, which extracts hand shape features from the palm target region image;

a hand shape recognition unit, which performs hand shape recognition according to the extracted hand shape features of the palm and the pre-established hand shape classifier, judges whether the hand shape of the palm is a valid hand shape, and, when it is judged to be valid, marks the palm as the currently activated palm;

a palm tracking unit, configured to detect the motion trajectory of the currently activated palm and determine the motion type of the currently activated palm;

the control command determination device is used to determine the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;

the image display control device switches to the corresponding video image or operates on the currently displayed video image according to the control command.

The beneficial effects of the present invention are as follows:

The video image display control method and video image display of the present invention capture the scene in front of the video image display, extract the human body region image from it, and then extract the user's gesture from the human body region image, so as to determine the corresponding control command according to the gesture; the video image display then controls the corresponding video image to be displayed according to the control command, thereby completing the active interaction between the user and the video image. Through the method and device of the present invention, users can actively and selectively view the content they are interested in. The technical solution adopted by the present invention enables active interaction between the user and the device and improves the interaction efficiency between the video image and the user, thereby improving the promotional effect of the video image itself and bringing the user a new experience.

Brief description of the drawings

Fig. 1 is a block diagram of an embodiment of the video image display of the present invention;

Fig. 2 is a block diagram of another embodiment of the video image display of the present invention;

Fig. 3a is a block diagram of an embodiment of the gesture detection device in Fig. 1;

Fig. 3b is a block diagram of another embodiment of the gesture detection device in Fig. 1;

Fig. 4 is a schematic diagram of an embodiment of the palm detection unit in Fig. 1;

Fig. 5 is a flowchart of an embodiment of the video image display control method of the present invention;

Fig. 6 is a flowchart of obtaining the human body region image in Fig. 5;

Fig. 7 is a flowchart of obtaining the difference image in Fig. 6;

Fig. 8 is a flowchart of the region connectivity analysis in Fig. 6;

Fig. 9 is a flowchart of updating the background model in Fig. 7;

Fig. 10 is a flowchart of the gesture detection in Fig. 5;

Fig. 11 is a flowchart of obtaining the palm target region in Fig. 10;

Fig. 12 is a flowchart of palm localization and acquisition in Fig. 11;

Fig. 13 is a flowchart of determining the palm motion type in Fig. 11;

Fig. 14a, Fig. 14b, Fig. 14c, Fig. 14d, Fig. 14e and Fig. 14f are schematic diagrams of an embodiment of palm localization and acquisition corresponding to Fig. 12;

Fig. 15a, Fig. 15b, Fig. 15c, Fig. 15d, Fig. 15e, Fig. 15f, Fig. 15g, Fig. 15h and Fig. 15i are schematic diagrams of an embodiment of the motion type classification of the activated palm in Fig. 13;

Fig. 16 is a schematic diagram of an embodiment of determining the control command in Fig. 6.

Detailed description of the embodiments

The present invention is further described in detail below through specific embodiments in conjunction with the accompanying drawings.

In recent years, computer vision technology has matured and is widely used in many fields. Against this background, it has become possible to use computer vision to recognize human hand shapes and gestures, thereby understanding and interpreting human actions and completing human-machine interaction. The present invention is a video image display control method and video image display based on this computer vision technology.

Please refer to Fig. 1. An embodiment of the video image display of the present invention comprises a camera device 1, a human body detection device 2, a gesture detection device 3, a control command determination device 4 and an image display control device 5. The camera device 1 is connected to the human body detection device 2, the human body detection device 2 is connected to the gesture detection device 3, the gesture detection device 3 is connected to the control command determination device 4, and the control command determination device 4 is connected to the image display control device 5. The camera device 1 is used to collect a real-time scene image in front of the image display control device 5 and send it to the human body detection device 2; the human body detection device 2 performs human body detection on the received real-time scene image, obtains a human body region image and sends it to the gesture detection device 3; the gesture detection device 3 performs gesture detection on the received human body region image and sends the gesture to the control command determination device 4; the control command determination device 4 determines the corresponding control command according to the received gesture and sends the control command to the image display control device 5; the image display control device 5 controls the display of the video image on the display device according to the control command.

Please refer to Fig. 2. In another embodiment of the present invention, the video image display further comprises a background update device 6 connected to the human body detection device 2, which is used to judge whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region image; if so, the background model is kept unchanged, otherwise the background model is updated.

Please refer to Fig. 3a. In one embodiment of the present invention, when the gesture detected by the gesture detection device 3 includes the hand shape of the palm, the gesture detection device 3 comprises a palm detection unit 31, a hand shape feature extraction unit 32 and a hand shape recognition unit 33. The palm detection unit 31 is connected to the hand shape feature extraction unit 32; it performs palm target detection on the human body region image obtained by the human body detection device 2, obtains a palm target image and sends it to the hand shape feature extraction unit 32. The hand shape feature extraction unit 32 is connected to the hand shape recognition unit 33; it extracts hand shape features from the received palm target image and sends them to the hand shape recognition unit 33. The hand shape recognition unit 33 is connected to the control command determination device 4 and performs hand shape recognition according to the received hand shape features of the palm and the pre-established hand shape classifier, judging whether the hand shape of the palm is a valid hand shape; if it is valid, the control command determination device 4 determines the control command corresponding to the valid hand shape according to the pre-established gesture database. The image display control device 5 then switches to the corresponding video image or operates on the currently displayed video image according to the control command; the currently displayed video image may be a video image that has not yet been switched by the user, or a video image that has just been switched according to the user's gesture.

Please refer to Fig. 3b. In another embodiment of the present invention, when the gesture detected by the gesture detection device 3 includes both the hand shape of the palm and the motion trajectory of the palm, the gesture detection device 3 comprises a palm detection unit 31, a hand shape feature extraction unit 32, a hand shape recognition unit 33 and a palm tracking unit 34 connected to the hand shape recognition unit 33. When the hand shape recognition unit 33 judges that the hand shape of the palm is a valid hand shape, the palm is marked as the activated palm and sent to the palm tracking unit 34 and the control command determination device 4. The palm tracking unit 34 is connected to the control command determination device 4 and is used to detect the motion trajectory of the received activated palm and determine the motion type of the currently activated palm. The control command determination device 4 then determines the corresponding control command in the pre-established gesture database according to the motion type of the currently activated palm and the valid hand shape. The image display control device 5 switches to the corresponding video image or operates on the currently displayed video image according to the control command.

Please refer to Fig. 4. In one embodiment of the present invention, the palm detection unit 31 comprises a skin color detection module 311, a face detection module 312 and a palm target acquisition module 313. The skin color detection module 311 is connected to the face detection module 312; it detects the obtained human body region image according to human skin color features and extracts the face, palm and/or arm regions. The face detection module 312 is connected to the palm target acquisition module 313 and is used to detect the face region among the extracted regions and send the detection result to the palm target acquisition module 313. The palm target acquisition module 313 deletes the face region according to the detection result and obtains the palm target region image.

Please refer to Fig. 4. When the gesture detected by the gesture detection device 3 includes both the hand shape of the palm and the motion trajectory of the palm, the palm target acquisition module 313 comprises a palm region identification sub-module 3131 and a palm acquisition sub-module 3132 connected to it. The palm region identification sub-module 3131 is used to judge, for the palm and/or arm region from which the face region has been deleted, whether the region contains only the palm; if so, it identifies the palm target region image, otherwise it identifies the region image as a palm-and-arm region image and sends it to the palm acquisition sub-module 3132, which obtains the palm target region image from the palm-and-arm region image.

In another embodiment of the present invention, the palm detection unit 31 further comprises a palm target correction module 314 connected to the palm target acquisition module 313, which is used to perform region connectivity analysis on the palm target region image obtained by the palm target acquisition module 313, so as to obtain a complete palm target region image.

Based on the above video image display, the present invention proposes a video image display control method. The method is described in detail below in conjunction with the accompanying drawings and specific embodiments.

Please refer to Fig. 5. A video image display control method comprises the steps of:

S1. Collecting a real-time scene image in front of the display device.

S2. Performing human body detection on the real-time scene image, and obtaining a human body region image.

S3. Detecting a gesture in the human body region image.

S4. Determining the control command corresponding to the gesture.

S5. Controlling the display of the video image on the display device according to the control command.

In one embodiment of the present invention, each collected frame is also cached. Therefore, in this embodiment, collecting the real-time scene image is followed by step S6: caching the collected real-time scene image in a frame data buffer.

In order to control the image data well and keep data acquisition and processing smooth, the frame data buffer in this embodiment adopts a double-buffer queue technique for the video stream, so that writing frame image data into the buffer and reading data out of the buffer are kept separate.
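As an illustration of this double-buffer idea, the following is a minimal Python sketch, assuming one capture thread and one processing thread; the class and method names are hypothetical and not part of the patent.

```python
import queue
import threading

class DoubleBufferedFrames:
    """Minimal double-buffer frame queue sketch: the capture thread writes into
    one queue while the processing thread drains the other; the two queues are
    swapped when the read side runs dry."""

    def __init__(self, maxsize=64):
        self._write_q = queue.Queue(maxsize=maxsize)
        self._read_q = queue.Queue(maxsize=maxsize)
        self._lock = threading.Lock()

    def put_frame(self, frame):
        # Called by the capture thread; drop the frame if the buffer is full.
        try:
            self._write_q.put_nowait(frame)
        except queue.Full:
            pass

    def get_frame(self, timeout=0.1):
        # Called by the processing thread; swap the queues when the read side is empty.
        if self._read_q.empty():
            with self._lock:
                self._read_q, self._write_q = self._write_q, self._read_q
        try:
            return self._read_q.get(timeout=timeout)
        except queue.Empty:
            return None
```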

In one embodiment of the present invention, in order to obtain a more accurate image, the collected real-time scene image needs to be preprocessed, which comprises the steps of:

S7. Converting the color space of the collected real-time scene image from RGB to HSV.

This facilitates the human body detection in step S2. The distribution of skin color in the color space is quite concentrated, but it is strongly affected by lighting and ethnicity. To reduce the influence of lighting intensity on skin color, in this embodiment the real-time scene image is converted to a color space in which luminance and chrominance are separated, and the luminance component is then discarded.

The HSV space represents color by the three elements hue (H), saturation (S) and value (V), and is a nonlinear color representation system. The HSV representation is consistent with human perception of color, and in HSV space this perception is relatively uniform; HSV is therefore a color space suited to human visual characteristics. After converting from RGB to HSV, the information structure is more compact, the independence of the components is enhanced, and little color information is lost. Therefore, the HSV color space is adopted in this embodiment.

Of course, the color space model in this embodiment may also be another color space, such as YCbCr.

The conversion from RGB space to HSV space is as follows, with R, G and B in [0, 1]:

V = max(R, G, B)

S = (V - min(R, G, B)) / V, if V ≠ 0; S = 0 otherwise

H = 60 × (G - B) / (V - min(R, G, B)), if V = R;
H = 120 + 60 × (B - R) / (V - min(R, G, B)), if V = G;
H = 240 + 60 × (R - G) / (V - min(R, G, B)), if V = B;
and H = H + 360 if H < 0.

S8. Denoising the image after the color space conversion. In this embodiment, median filtering is used to denoise the image.

Since the real-time scene image collected in step S1 contains noise, the image needs to be denoised in order to obtain a better image.
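A minimal OpenCV sketch of this preprocessing (steps S7 and S8), assuming frames arrive as 8-bit BGR images; the function name and the 5x5 median kernel are assumptions, not values fixed by the patent.

```python
import cv2

def preprocess_frame(frame_bgr):
    """Preprocessing sketch: convert the captured frame to HSV (S7) and
    denoise it with a median filter (S8)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)  # note: OpenCV stores H in [0, 179]
    denoised = cv2.medianBlur(hsv, 5)                 # median filtering
    return denoised
```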

Please refer to Fig. 6. In one embodiment of the present invention, the human body detection and acquisition of the human body region image in step S2 comprise the steps of:

S21. Performing a pixel-level subtraction between the currently acquired real-time scene image frame and the reference image obtained from the background model, to obtain a difference image.

S22. Binarizing the difference image to obtain a binarized difference image.

S23. Performing morphological processing on the binarized difference image.

In some cases, for example when the shooting direction of the camera is roughly the same as the direction of human motion, the preliminary binarized difference image contains holes and noise points, so it needs to be processed morphologically.

In one embodiment of the present invention, the morphological processing in step S23 comprises: using an erosion operation to remove isolated noise points in the binarized difference image, and using a dilation operation to fill the holes in the binarized difference image. The structuring elements of the erosion and dilation operations are cross-shaped structuring elements whose length and width are 3.
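A short sketch of this morphological cleanup using OpenCV, with the 3x3 cross-shaped structuring element described above; applying erosion once and dilation once is an assumption about the iteration count.

```python
import cv2

def clean_binary_mask(binary_mask):
    """Remove isolated noise points with erosion and fill small holes with
    dilation, using a 3x3 cross-shaped structuring element (step S23)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    eroded = cv2.erode(binary_mask, kernel, iterations=1)
    return cv2.dilate(eroded, kernel, iterations=1)
```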

S24. Performing connectivity processing on the parts of the binarized difference image that conform to a predetermined connectivity rule, so as to obtain connected regions.

Since the binarized image contains some scattered regions and pixels, the parts of the image conforming to the predetermined rule need to be connected through region connectivity analysis. In this embodiment, the predetermined connectivity rule in step S24 is the 8-connectivity rule; of course, another connectivity rule, such as the 4-connectivity rule, could also be used.

S25. Judging whether the total number of pixels within each connected region is less than a set threshold; if so, the connected region is regarded as a noise region and deleted. The region image composed of all the connected regions finally remaining is the human body region image, which is output. The threshold can be set according to experience.

When gestures are detected directly, noise often exists in the extracted palm region image, and this noise lies very close to the palm and so affects the judgment of the gesture. In order to obtain a more accurate gesture, the present invention first performs human body detection and then detects the gesture, so that the noise is removed during human body detection and the detected gesture is more accurate.

Since the human body may be constantly moving, the background image also changes from one collected scene image to the next. In order to obtain a more accurate background image, the background model needs to be updated.

Therefore, in another embodiment of the present invention, step S2 further comprises the step of:

S26. Judging whether each pixel in the currently acquired real-time scene image belongs to the detected human body region; if so, the background model is kept unchanged, otherwise the background model is updated.

Please refer to Fig. 7. In one embodiment of the present invention, step S21 comprises the steps of:

S211. Obtaining the preprocessed image.

S212. Judging whether the current background model has already been established; if so, performing step S213, otherwise performing step S214.

S213. Subtracting the pixel value of each pixel in the background reference image bk(x, y) obtained from the background model from the pixel value of the corresponding pixel in the currently acquired preprocessed frame image fk(x, y), to obtain the difference image Dk(x, y), where Dk(x, y) = |fk(x, y) - bk(x, y)|.

S214. Establishing for each pixel a model B = [μ, δ²] represented by a single Gaussian distribution, where μ is the mean and δ² is the variance.

S215. Outputting the difference image.

In one embodiment of the present invention, binarizing the difference image in step S22 comprises:

S221. Presetting an image segmentation threshold T = kδ and comparing the pixel value of each point of the difference image with this preset threshold. The preset threshold can be set according to experience or computed with an existing adaptive algorithm. In this embodiment, the threshold T is set to three times the standard deviation of the pixel value of the current pixel.

S222. Comparing the pixel value of each pixel in the difference image with the segmentation threshold T and segmenting the difference image according to the comparison result, so as to obtain the binarized difference image:

Mk(x, y) = 1 (foreground), if Dk(x, y) > T; Mk(x, y) = 0 (background), otherwise.

In this embodiment, if the pixel value of the current pixel is greater than the threshold T, its value is set to 1; if it is less than or equal to the threshold T, its value is set to 0. The difference image is thereby binarized, i.e. the binarized difference image is obtained.

Of course, in this embodiment it would also be possible to set pixels whose values are greater than the threshold to 0 and pixels whose values are less than or equal to the threshold to 1.
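The per-pixel test in steps S21/S22 can be sketched as follows against the single-Gaussian model B = [μ, δ²]; the array shapes and the grayscale assumption are illustrative.

```python
import numpy as np

def detect_foreground(frame, mean, var, k=3.0):
    """Compute D_k = |f_k - b_k| against the Gaussian background model and
    binarize with T = k * delta (k = 3 in this embodiment).
    `frame`, `mean` and `var` are float arrays of identical shape."""
    diff = np.abs(frame - mean)                  # difference image D_k(x, y)
    threshold = k * np.sqrt(var)                 # per-pixel threshold T = k * delta
    return (diff > threshold).astype(np.uint8)   # M_k: 1 = foreground, 0 = background
```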

Please refer to Fig. 8. In one embodiment of the present invention, the connected region analysis of the binarized difference image in step S24 comprises the steps of:

S241. Scanning the current binarized difference image from top to bottom and from left to right.

S242. Judging whether the current pixel is a foreground point; if so, marking it with a new ID, otherwise continuing with step S241.

A foreground point here is a pixel whose value changes because of the appearance of human motion in the current real scene.

S243. Judging whether the pixels in the 8-connected neighborhood of this foreground point are foreground points; if so, marking them with the same ID and pushing them onto a stack.

S244. After these 8 pixels have been judged, checking whether the stack is empty; if it is not empty, popping the top element of the stack; if it is empty, ending the scan and performing step S246.

S245. Continuing the above 8-connectivity judgment on the popped pixel, and repeating the above process until the stack is empty, so that the foreground region with the same ID is obtained.

S246. When the entire image has been scanned, all connected regions have been obtained, and each connected region has a unique identification ID.
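A compact Python sketch of this stack-based 8-connected labelling; it follows steps S241-S246 directly and is written for clarity rather than speed.

```python
import numpy as np

def label_connected_regions(mask):
    """Scan the binary mask top-to-bottom, left-to-right, give each new
    foreground pixel a fresh ID, and flood-fill its 8-connected neighbours
    with the same ID using an explicit stack."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_id = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 1 and labels[y, x] == 0:
                next_id += 1
                labels[y, x] = next_id
                stack = [(y, x)]
                while stack:                           # pop until the stack is empty
                    cy, cx = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                               and mask[ny, nx] == 1 and labels[ny, nx] == 0:
                                labels[ny, nx] = next_id
                                stack.append((ny, nx))
    return labels, next_id
```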

Please refer to Fig. 9. In this embodiment, updating the background model in step S26 comprises the steps of:

S261. Obtaining the foreground mask, i.e. obtaining the pixels whose value is 1.

S262. Judging whether the pixel belongs to the detected human body region; if so, performing step S263, otherwise performing step S264.

S263. Keeping the parameters of the statistical model of the background pixel unchanged. In this embodiment, let the current frame image be Ii, α the learning rate, μ the mean and δ the standard deviation; the background update formula is:

μi+1 = μi

δ²i+1 = δ²i.

S264. Updating the parameters of the statistical model of the background pixel, with the background update formula:

μi+1 = (1 - α)μi + αIi

δ²i+1 = (1 - α)δ²i + α(Ii - μi)²,

where, in this embodiment, the learning rate α can be set to 0.002; of course, it can also be set to another value.
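A sketch of this selective update, assuming grayscale float arrays for the mean and variance and a 0/1 mask of the detected human body region; only the update rule and α = 0.002 come from the text above.

```python
import numpy as np

def update_background(mean, var, frame, body_mask, alpha=0.002):
    """Step S26: keep the Gaussian parameters of pixels inside the detected
    human body region, and apply the running update
    mu <- (1 - a) * mu + a * I,  var <- (1 - a) * var + a * (I - mu)^2
    to all other (background) pixels."""
    bg = (body_mask == 0)
    diff = frame - mean                       # uses the old mean, as in the formula
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * diff[bg] ** 2
    return mean, var
```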

Please refer to Fig. 10. In one embodiment of the present invention, when the gesture in step S3 includes the hand shape of the palm, step S3 comprises:

S31. Performing palm target detection on the obtained human body region, and obtaining a palm target region image.

S32. Extracting hand shape features from the palm target region image.

S33. Performing hand shape recognition according to the extracted hand shape features and the pre-established hand shape classifier, and judging whether the hand shape of the palm is a valid hand shape; then performing step S4.

Please refer to Fig. 11. In one embodiment of the present invention, performing palm target detection and obtaining the palm target region image in step S31 comprise the steps of:

S311. Performing skin color detection on the obtained human body region image, and obtaining an image containing face, palm or arm regions.

Because the color tone of human skin is distributed within a certain range, the face and the arm and palm parts can be extracted from the human body region through skin color features.

The distribution of skin color in the color space is quite concentrated, but it is strongly affected by lighting and ethnicity. To reduce the influence of lighting intensity on skin color, this embodiment converts the color space of the scene image to HSV in step S7, thereby separating luminance from chrominance. At the same time, to avoid the influence of brightness changes within the same shot and brightness changes from other causes, in this embodiment the luminance component is discarded during the skin color detection in step S311, and only the H component of the image is used as the basis for detection.

Skin color pixels are then segmented according to the clustering of skin color in the H component: a threshold in HSV space is determined by statistical analysis, and the skin color regions are segmented according to this threshold, so as to distinguish the face, palm and/or arm regions.
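For illustration, a minimal H-component segmentation might look as follows; the hue interval is an assumption, since the patent determines the threshold by statistical analysis of skin samples.

```python
import numpy as np

def skin_mask_from_hue(hsv_image, h_low=0, h_high=20):
    """Step S311 sketch: keep only pixels whose H component falls in an assumed
    skin-tone interval (OpenCV stores H in [0, 179])."""
    h = hsv_image[:, :, 0]
    return ((h >= h_low) & (h <= h_high)).astype(np.uint8)
```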

S312. Selecting one region from the above region images.

S313. Performing face detection on this region according to a pre-established face model; if a face is detected, discarding the region and performing step S314, otherwise outputting the palm and/or arm region image and performing step S315.

S314. Judging whether there are still regions to be detected; if so, performing step S313, otherwise ending the operation.

S315. If the aspect ratio of the region image is judged to be not greater than 2, the region image is judged to be a palm target region image and step S317 is performed; otherwise the region is judged to be a palm-and-arm region image and step S316 is performed.

S316. Using a palm localization algorithm to locate the palm in the palm-and-arm region, and obtaining the palm region.

In order to obtain a complete palm region image, in one embodiment of the present invention, step S315 further comprises:

when the region is judged to be a palm region image, performing step S318: performing region connectivity analysis on the palm region image so as to obtain a complete palm region image, and then performing step S317;

when the region is judged to be a palm-and-arm region image, performing step S318 before step S316, i.e. performing region connectivity analysis on the palm-and-arm region image so as to obtain a complete palm-and-arm region image.

In this embodiment, the connected region analysis adopts the 8-connectivity rule: it is judged whether the H-component values of the pixel at the seed point coordinates in the original frame image and of its 8 surrounding neighboring pixels are less than a set threshold; if so, they are regarded as belonging to the same class of pixels and are added to the connected region, giving a complete palm and/or arm region image.

In this example, face detection is used to delete the face region. There are two face detection approaches:

The first is the knowledge-based face detection method: the positions of different facial features are detected, and the face is then located according to knowledge rules. Since the distribution of the local features of a face always follows certain rules (for example, the eyes are always symmetrically distributed in the upper half of the face), a set of rules describing the distribution of local facial features can be used for face detection, with two detection strategies: top-down and bottom-up.

The second is the appearance-based method: faces share a unified structural pattern, and the classifier can be implemented with different strategies, such as neural networks or traditional statistical methods. A classifier that can correctly distinguish face and non-face samples is therefore first built by learning from a large set of training samples; the image is then scanned globally, and the classifier is used to determine whether each scanned image window contains a face and, if so, to give the position of the face.

In one embodiment of the present invention, face detection adopts the appearance-based method, comprising: S313a, collecting a large number of face image samples offline; S313b, extracting multi-dimensional feature vectors of the faces and reducing their dimensionality with PCA (Principal Component Analysis); S313c, training a neural network with the extracted feature vectors to obtain a face classifier; S313d, performing face detection on the human body region image with the face classifier according to the above feature vectors; S313e, if a face is detected, deleting the face region, so as to obtain the palm and/or arm region image.
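A hedged sketch of steps S313a-S313d: PCA for dimensionality reduction and a small neural network classifier, here implemented with scikit-learn; the sample format, component count and network size are assumptions, not values given by the patent.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_face_classifier(face_imgs, nonface_imgs, n_components=50):
    """Flatten the offline samples (equal-sized grayscale windows), reduce the
    feature vectors with PCA and train a neural-network face classifier."""
    X = np.array([img.ravel() for img in face_imgs + nonface_imgs], dtype=np.float32)
    y = np.array([1] * len(face_imgs) + [0] * len(nonface_imgs))
    pca = PCA(n_components=n_components).fit(X)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(pca.transform(X), y)
    return pca, clf

def window_contains_face(window, pca, clf):
    """Classify one scanned image window of the same size as the training samples."""
    feat = pca.transform(window.ravel().reshape(1, -1).astype(np.float32))
    return bool(clf.predict(feat)[0])
```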

Please refer to Fig. 12. In one embodiment of the present invention, palm localization and acquisition in step S316 comprise the steps of:

S316a. Using the Canny operator to perform edge detection on the palm-and-arm region image, acquiring edge information, and obtaining the region contour, as shown in Fig. 14a.

S316b. Fitting a minimum circumscribed ellipse to the region contour, and obtaining the circumscribed ellipse information, including the major axis, the minor axis and the angle with the horizontal axis, as shown in Fig. 14b.

S316c. Obtaining the direction information of the region contour according to the major axis of the circumscribed ellipse and its angle with the horizontal axis, and thereby finally obtaining the orientation of the arm and palm, as shown in Fig. 14c.

S316d. Rectifying the oriented region through a geometric coordinate transformation of the image, so that the arm and palm point vertically upward, as shown in Fig. 14d.

S316e. Performing palm localization detection on the rectified arm and palm region, and obtaining the palm target region image.

As shown in Fig. 14e and Fig. 14f, a palm localization algorithm is used in this embodiment to locate the palm, specifically: the edge pixels of the palm-and-arm region are projected in the vertical direction to find the end where the palm lies; all pixels of the palm-and-arm region are then projected in the vertical direction, and the peak point on the projection axis is sought starting from the palm end; the valley point that appears after this peak point is taken as the segmentation point between arm and palm; the palm-and-arm region is segmented in the vertical direction at this segmentation point, so that the arm is removed and the palm part is obtained, i.e. the palm target region image is obtained.
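The following OpenCV sketch strings steps S316a-S316e together; the ellipse-angle handling and the peak/valley search are simplified assumptions about the palm localization algorithm, and OpenCV 4 return conventions are assumed.

```python
import cv2
import numpy as np

def locate_palm(arm_palm_mask):
    """Fit the circumscribed ellipse to the region contour, rotate the region so
    the arm/palm axis is vertical, then split palm from arm at the valley that
    follows the peak of the vertical projection (steps S316a-S316e)."""
    contours, _ = cv2.findContours(arm_palm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), _axes, angle = cv2.fitEllipse(contour)          # circumscribed ellipse

    # Rotate so the major axis is roughly vertical (sign conventions simplified).
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    upright = cv2.warpAffine(arm_palm_mask, rot, arm_palm_mask.shape[::-1])

    profile = (upright > 0).sum(axis=1)                       # vertical projection (pixels per row)
    rows = np.nonzero(profile)[0]
    top, bottom = rows[0], rows[-1]
    if profile[top:top + 10].mean() < profile[max(top, bottom - 10):bottom + 1].mean():
        upright, profile = upright[::-1], profile[::-1]       # flip so the palm end is on top
        rows = np.nonzero(profile)[0]
        top, bottom = rows[0], rows[-1]
    peak = top + int(np.argmax(profile[top:bottom + 1]))      # widest row of the palm
    valley = peak + int(np.argmin(profile[peak:bottom + 1]))  # wrist: valley after the peak
    return upright[top:valley + 1, :]                         # palm target region, arm removed
```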

S317. Outputting the palm target region image, and performing step S32 to extract hand shape features from the palm target region image.

Please refer to Fig. 10. In one embodiment of the present invention, if the gesture in step S3 includes the hand shape of the palm, then after the hand shape features have been extracted, step S33 comprises: performing hand shape recognition according to the extracted hand shape features and the pre-established hand shape classifier, and judging whether the hand shape of the palm is valid; if it is valid, performing step S4: determining the control command corresponding to the valid hand shape according to the pre-established gesture database; otherwise discarding the hand shape.

Please refer to Fig. 13. In another embodiment of the present invention, if the gesture in step S3 includes both the hand shape of the palm and the motion trajectory of the palm, then after the hand shape features have been extracted, step S33 further comprises: marking the palm as the activated palm and tracking the motion trajectory of the currently activated palm, so as to determine the motion type of the currently activated palm.

When the current hand shape of the palm is judged to be valid, step S4 is performed: in the pre-established gesture database, the corresponding control command is determined according to the motion type of the currently activated palm.

Finally step S5 is performed: switching to the corresponding video image or operating on the current video image according to the determined control command.

In one embodiment of the present invention, gestures are divided into static and moving gestures. For a static gesture, the corresponding control command is obtained according to the valid hand shape; for a moving gesture, the motion type must first be determined, and the corresponding control command is then obtained according to the motion type of the palm and/or the valid hand shape. The motion includes moving up, down, left, right, and so on.

Valid hand shapes: N1, left five-finger palm and right five-finger palm, as shown in Fig. 15c; N2, left five-finger palm and right fist, as shown in Fig. 15d; N3, left five-finger palm and right one-finger palm, as shown in Fig. 15e; N4, left one-finger palm and right five-finger palm, as shown in Fig. 15f.

Motion types: leftward motion includes M1, a single five-finger palm moving to the left, as shown in Fig. 15b; rightward motion includes M2, a single five-finger palm moving to the right, as shown in Fig. 15a; left-and-right motion includes M3, the left five-finger palm moving left while the right five-finger palm moves right, as shown in Fig. 15g, and M4, the left five-finger palm moving right while the right five-finger palm moves left, as shown in Fig. 15h; combined motion and stillness: NAM, the left five-finger palm remaining still while the right one-finger palm moves, as shown in Fig. 15i.

Of course, the motion types in this embodiment may also be others.
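As a simple illustration of the motion-type idea, the trajectory of a single tracked palm could be classified into M1 or M2 from the horizontal displacement of its centroid; the displacement threshold and return codes below are assumptions.

```python
def classify_single_palm_motion(track, min_shift=40):
    """`track` is a list of (x, y) palm centroids over recent frames.
    Returns "M1" for a dominant leftward move, "M2" for rightward, else None."""
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    if dx <= -min_shift:
        return "M1"   # single five-finger palm moving left
    if dx >= min_shift:
        return "M2"   # single five-finger palm moving right
    return None       # no dominant horizontal motion (e.g. a static gesture)
```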

As shown in Fig. 13, in one embodiment of the present invention, the establishment and training of the hand shape classifier comprise: collecting a large set of hand shape image samples offline; extracting the hand shape features from them; and then training a neural network with the obtained hand shape features to obtain the hand shape classifier.

In this embodiment, each sample set is an image template representing a different hand shape. The hand shape features include: hand contour, hand curvature, hand perimeter, hand area, hand convexity and concavity, the vertical projection of the hand edge, and the horizontal projection of the hand edge. Of course, other features can also be used as hand shape features in this embodiment. The neural network in this embodiment adopts a three-layer neural network model; of course, other neural network models can also be used.
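A hedged sketch of the feature extraction and the three-layer network; the exact feature encoding (especially the fixed-length projections) and the hidden-layer size are assumptions.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def hand_shape_features(palm_mask):
    """Extract a few of the listed hand shape features from a 0/1 palm mask:
    perimeter, area, convexity, and vertical/horizontal edge projections
    resampled to a fixed length."""
    contours, _ = cv2.findContours(palm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    perimeter = cv2.arcLength(contour, True)
    area = cv2.contourArea(contour)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    convexity = area / hull_area if hull_area > 0 else 0.0
    edges = cv2.Canny(palm_mask * 255, 50, 150)
    v_proj = cv2.resize(edges.sum(axis=0).astype(np.float32)[None, :], (16, 1)).ravel()
    h_proj = cv2.resize(edges.sum(axis=1).astype(np.float32)[None, :], (16, 1)).ravel()
    return np.concatenate(([perimeter, area, convexity], v_proj, h_proj))

def train_hand_shape_classifier(feature_vectors, labels):
    """Three-layer model (input layer, one hidden layer, output layer)."""
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(
        np.array(feature_vectors), np.array(labels))
```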

Referring to FIG. 16, in one embodiment of the present invention, determining the control command in step S4 includes:

S41: obtaining the motion type of the activated palm identified in step S3.

S42: according to the motion type of the activated palm, searching the pre-established hand-shape and gesture database for the corresponding gesture; if the corresponding gesture is found in the database, obtaining the command associated with that gesture, otherwise taking no action. The command includes the operation to be performed by the gesture and the object of the operation.

S43: determining whether the operation object is a video/animation file or a picture file; if it is a video/animation file, executing step S44; if it is a picture file, executing step S45.

S44: interpreting the gesture and outputting the corresponding control command, for example:

if the gesture of the currently activated palm is M1, the corresponding control command in the gesture database is to switch to playing the previous video/animation file, and the corresponding control command is output;

if the gesture of the currently activated palm is M2, the gesture is interpreted as playing the next video/animation file, and the corresponding control command is output;

if the gesture of the currently activated palm is N1, the gesture is interpreted as playing the current video/animation file, and the corresponding control command is output;

if the gesture of the currently activated palm is N2, the gesture is interpreted as pausing the current video/animation file, and the corresponding control command is output;

if the gesture of the currently activated palm is N3, the gesture is interpreted as fast-forwarding the current video/animation file, and the corresponding control command is output;

if the gesture of the currently activated palm is N4, the gesture is interpreted as rewinding the current video image, and the corresponding control command is output.

S45: interpreting the gesture and outputting the corresponding control command, for example:

if the gesture of the currently activated palm is M1, the gesture is interpreted as displaying the previous picture, and the corresponding control signal is output;

if the gesture of the currently activated palm is M2, the gesture is interpreted as displaying the next picture, and the corresponding control command is output;

if the gesture of the currently activated palm is M3, the gesture is interpreted as zooming in on the picture, and the corresponding control command is output;

if the gesture of the currently activated palm is M4, the gesture is interpreted as zooming out of the picture, and the corresponding control command is output;

if the gesture of the currently activated palm is NAM, the gesture is interpreted as moving the picture, and the corresponding control command is output. A sketch of this lookup and dispatch is given after this list.
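A minimal sketch of steps S41–S45, assuming the gesture codes defined above and illustrative command names that are not defined in the original text, could be:

```python
from typing import Optional

# Illustrative mapping from gesture codes to control commands, split by target type (S43).
VIDEO_COMMANDS = {
    "M1": "play_previous_video",
    "M2": "play_next_video",
    "N1": "play_current_video",
    "N2": "pause_current_video",
    "N3": "fast_forward",
    "N4": "rewind",
}

PICTURE_COMMANDS = {
    "M1": "show_previous_picture",
    "M2": "show_next_picture",
    "M3": "zoom_in",
    "M4": "zoom_out",
    "NAM": "move_picture",
}

def determine_control_command(gesture_code: str, target_is_video: bool) -> Optional[str]:
    """Return the control command for a recognized gesture, or None (no action)."""
    table = VIDEO_COMMANDS if target_is_video else PICTURE_COMMANDS
    return table.get(gesture_code)

print(determine_control_command("M1", target_is_video=True))   # -> 'play_previous_video'
print(determine_control_command("M3", target_is_video=False))  # -> 'zoom_in'
print(determine_control_command("M3", target_is_video=True))   # -> None, so no action is taken
```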

With the video image display control method of the present invention, the user only needs to make the corresponding gesture, whether static or moving, to select the desired video image for display or to operate on the currently displayed video image. Active interaction between the user and the video image display is thereby achieved, improving the efficiency of interaction between video images and users.

The video image display control method described above can be used to display video advertisement pictures or animations, and can also be used to display other pictures or animations.

The foregoing is a further detailed description of the present invention in conjunction with specific embodiments, and the specific implementation of the present invention should not be regarded as limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all such modifications shall be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A video image display control method, characterized in that it comprises the steps of:
A. capturing a real-time scene image in front of a display device;
B. performing human body detection on the real-time scene image, and obtaining a human body region image;
C. detecting a gesture in the human body region image;
D. determining the control command corresponding to the gesture;
E. controlling the display of a video image on the display device according to the control command.
2. The method of claim 1, characterized in that step B comprises: comparing the currently captured real-time scene image frame with a reference image obtained from a background model, thereby detecting the human body region image.
3. The method of claim 2, characterized in that the step of obtaining the human body region image comprises:
performing a pixel-level subtraction between the currently captured real-time scene image frame and the reference image obtained from the background model, to obtain a difference image;
binarizing the difference image to obtain a binarized difference image;
performing morphological processing on the binarized difference image;
performing connectivity processing on the binarized difference image according to a predetermined connectivity rule, to obtain connected regions;
determining whether each connected region is a noise region and, if so, deleting it;
taking the region image composed of all remaining connected regions as the human body region image, and outputting the human body region image.
4. The method of claim 2 or 3, characterized in that it further comprises the step of: determining whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
5. The method of any one of claims 1 to 4, characterized in that the gesture comprises the hand shape of a palm, and step C comprises:
performing palm target detection on the human body region image, and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a pre-established hand-shape classifier, and determining whether the hand shape of the palm is a valid hand shape;
wherein in step D, when the hand shape of the palm is determined to be a valid hand shape, the control command corresponding to the valid hand shape is determined according to a pre-established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and step C comprises:
performing palm target detection on the human body region image, and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a pre-established hand-shape classifier, determining whether the hand shape of the palm is a valid hand shape, and, when the hand shape of the palm is determined to be a valid hand shape, marking the palm as the currently activated palm;
detecting the motion trajectory of the currently activated palm, and determining the motion type of the currently activated palm;
wherein in step D, the corresponding control command is determined in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
and in step E, the corresponding video image is switched to, or the currently displayed video image is operated on, according to the control command.
6. The method of claim 5, characterized in that the step of performing palm target detection on the human body region image comprises:
performing skin color detection on the human body region image, to obtain a region image containing the face, arm or palm;
obtaining the region image of the arm and/or palm according to a pre-established skin color detection model;
detecting the palm in the region image of the arm and/or palm.
7. The method of claim 6, characterized in that the step of detecting the palm in the region image of the arm and/or palm comprises:
determining whether the aspect ratio of the region image of the arm and/or palm is greater than 2; if so, determining that the region is an arm-and-palm region image, otherwise it is a palm region image;
when the region is determined to be an arm-and-palm region image, performing edge detection on the arm-and-palm region image to obtain edge information and thereby the region contour;
fitting a minimum circumscribed ellipse to the region contour, to obtain the parameters of the circumscribed ellipse;
obtaining the orientation of the region contour from the parameters of the circumscribed ellipse, thereby finally obtaining the direction in which the arm and palm point;
rectifying the arm region image and the palm region image whose orientation has been obtained, so that the arm and palm point straight up;
performing palm localization on the rectified arm and palm region image, to obtain the palm target region image.
8. A video image display, characterized in that it comprises:
a camera device for capturing a real-time scene image in front of a display device;
a human body detection device for performing human body detection on the real-time scene image and obtaining a human body region image;
a gesture detection device for detecting a gesture in the human body region image;
a control command determination device for determining the control command corresponding to the gesture;
an image display control device for controlling the display of a video image on the display device according to the control command.
9. The video image display of claim 8, characterized in that the human body detection device is configured to compare the currently captured real-time scene image frame with a reference image obtained from a background model, thereby detecting the human body region image.
10. The video image display of claim 8 or 9, characterized in that it further comprises a background update device configured to determine whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
11. The video image display of any one of claims 8 to 10, characterized in that the gesture comprises the hand shape of a palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition according to the extracted hand-shape features of the palm and a pre-established hand-shape classifier, and determining whether the hand shape of the palm is a valid hand shape;
wherein the control command determination device, when the hand shape of the palm is determined to be a valid hand shape, determines the control command corresponding to the valid hand shape according to a pre-established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition according to the extracted hand-shape features of the palm and a pre-established hand-shape classifier, determining whether the hand shape of the palm is a valid hand shape, and, when the hand shape of the palm is determined to be a valid hand shape, marking the palm as the currently activated palm;
a palm tracking unit for detecting the motion trajectory of the currently activated palm and determining the motion type of the currently activated palm;
wherein the control command determination device determines the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
and the image display control device switches to the corresponding video image, or operates on the currently displayed video image, according to the control command.
CN 201010612804 — Video image display control method and video image display device — priority date: 2010-09-28; filing date: 2010-12-29; status: Expired - Fee Related; granted as CN102081918B (en)

Priority Applications (1)

- CN 201010612804 (CN102081918B) — priority date 2010-09-28, filing date 2010-12-29 — Video image display control method and video image display device

Applications Claiming Priority (3)

- CN201010295067.4 — priority date 2010-09-28
- CN 201010295067 — priority date 2010-09-28
- CN 201010612804 (CN102081918B) — priority date 2010-09-28, filing date 2010-12-29 — Video image display control method and video image display device

Publications (2)

- CN102081918A — published 2011-06-01
- CN102081918B — published 2013-02-20

Family ID: 44087844

Family Applications (1)

- CN 201010612804 — Expired - Fee Related — CN102081918B

Country Status (1)

- CN: CN102081918B (en)



Patent Citations (4)

- CN1276572A — priority 1999-06-08, published 2000-12-13 — Matsushita Electric Industrial Co., Ltd. — Hand shape and gesture identifying device, identifying method and medium for recording a program containing said method
- CN1860429A — priority 2003-09-30, published 2006-11-08 — Koninklijke Philips Electronics N.V. — Gesture to define location, size, and/or content of content window on a display
- CN101605399A — priority 2008-06-13, published 2009-12-16 — Inventec (Shanghai) Electronics Co., Ltd. — Portable terminal and method for realizing sign language recognition
- CN101332362A — priority 2008-08-05, published 2008-12-31 — Beijing Vimicro Corporation — Interactive entertainment system based on human posture recognition and implementation method thereof

Non-Patent Citations (1)

- Hu Wenjuan, "Research and System Implementation of Gesture-Driven Chime-Bell Performance Technology" (手势驱动编钟演奏技术的研究与系统实现), 2007-08-15.


Also Published As

- CN102081918B — published 2013-02-20


Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination; request for substantive examination in force
- C14 / GR01: Grant of patent or utility model; patent granted
- ASS: Succession or assignment of patent right — owner: SHENZHEN RUIGONG TECHNOLOGY CO., LTD.; former owner: SHENZHEN GRADUATE SCHOOL OF PEKING UNIVERSITY; effective date: 2015-06-24
- C41 / TR01: Transfer of patent right — effective date of registration: 2015-06-24; patentee after: Shenzhen Rui Technology Co., Ltd., 17B1, EVOC Technology Building, No. 31, Nanshan District, Shenzhen, Guangdong, 518000; patentee before: Shenzhen Graduate School of Peking University, Shenzhen University Town, Xili, Nanshan District, Shenzhen, Guangdong, 518055
- CF01: Termination of patent right due to non-payment of annual fee — granted publication date: 2013-02-20; termination date: 2017-12-29
