CN107741781A - Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium - Google Patents

Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium

Info

Publication number
CN107741781A
Authority
CN
China
Prior art keywords
gesture
control
training
image
flight control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710778269.6A
Other languages
Chinese (zh)
Inventor
周翊民
常津津
吕琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201710778269.6A
Publication of CN107741781A
Legal status: Pending (current)

Abstract

Translated from Chinese

The present invention relates to the field of computer technology and provides a flight control method and device for an unmanned aerial vehicle (UAV), a UAV, and a storage medium. The method includes: capturing an image of the current scene with a camera on the UAV; detecting, with a trained deep learning model, whether a user gesture is present in the scene image; when a user gesture is detected, recognizing, with a trained gesture recognition model, the control gesture corresponding to the user gesture; translating the corresponding control gesture into a flight control instruction for the UAV using a pre-built gesture control dictionary; and controlling the flight of the UAV according to the flight control instruction. The UAV can thus be controlled by gestures without complex image preprocessing, which effectively reduces the cost of UAV flight control and improves the efficiency and convenience of UAV flight control.

Description

Translated from Chinese
Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium

Technical Field

The present invention belongs to the technical field of human-computer interaction for unmanned aerial vehicles (UAVs), and in particular relates to a flight control method and device for a UAV, a UAV, and a storage medium.

Background Art

As an emerging application platform, unmanned aerial vehicles (UAVs) have been widely used in both military and civilian fields owing to their small size, strong maneuverability, flexible operation, and low cost; their application scenarios in surveillance and search and rescue are especially broad.

At present, UAV flight is mainly controlled remotely: professionally trained operators use dedicated remote controllers or ground stations to perform control operations. This approach achieves precise, real-time control of the UAV with strong reliability, but it adds extra cost, the remote control equipment is inconvenient to carry, and it increases the operator's burden. For example, an operator using a remote controller needs in-depth knowledge of the UAV's servos, throttle delay, and other specialized topics, and must undergo simulated and actual flight training; an operator at a ground station must be familiar with the operations of the ground-station interface, which is highly specialized and difficult for ordinary beginners and hobbyists. To reduce cost, lighten the operator's burden, and let the operator interact with the UAV in a more natural, simple, and intuitive way, human-computer interaction technology that applies natural human gestures to UAV flight control has become one of the hotspots and difficulties in UAV flight control research.

At present, research on gesture recognition mainly covers gesture recognition based on hardware sensors and gesture recognition based on machine vision. There are two main kinds of hardware-sensor-based gesture recognition. In one, the user wears a glove that can capture gesture signals; such gloves are expensive, inconvenient for the user, and rather limited. The other uses a Kinect sensor to collect the spatial positions of the user's skeletal joints; recognition software on a computer identifies the action commands corresponding to this information and then generates the corresponding UAV flight control instructions, but the recognition success rate of this approach is low. Gesture recognition based on machine vision takes video frames of the user's hand directly as input and performs gesture recognition through image recognition. This approach provides unconstrained human-computer interaction with a high degree of interaction freedom and a realistic interaction experience, but its recognition accuracy still needs to be improved.

In the patent for an aircraft and its control method (CN106774947A) filed by Ehang Intelligent Equipment (Guangzhou) Co., Ltd., the aircraft is controlled by performing depth processing and gesture recognition on images collected by a binocular camera. This method solves the problem of high hardware cost in traditional UAV remote control, but it increases the complexity of the preprocessing required for gesture recognition, and the recognition success rate is not high. Nanjing University of Posts and Telecommunications proposed vision-based research on UAV gesture interaction. Without changing the existing hardware, this method adds an "interaction medium" between the UAV remote controller and the user's hand and inserts an ARM development board between the camera and the remote controller: gestures captured by the camera are recognized, the recognized gestures are converted into instructions and imported into the ARM development board, and then converted into specific voltage or resistance changes so as to control the UAV. This method still requires a remote controller to control the UAV's flight and does not solve the problems of high hardware cost and inconvenient portability of the remote control equipment.

Summary of the Invention

The purpose of the present invention is to provide a flight control method and device for a UAV, a UAV, and a storage medium, aiming to solve the problems in the prior art of high hardware cost, low accuracy, and low efficiency in UAV flight control.

In one aspect, the present invention provides a flight control method for a UAV, the method comprising the following steps:

capturing an image of the current scene with a camera preset on the UAV, and detecting, with a trained deep learning model, whether a user gesture is present in the scene image;

when a user gesture is detected in the scene image, recognizing the user gesture with a trained gesture recognition model to determine the control gesture corresponding to the user gesture in a pre-built control gesture library;

translating the corresponding control gesture into a flight control instruction for the UAV using a pre-built gesture instruction dictionary, and controlling the flight of the UAV according to the flight control instruction.

In another aspect, the present invention provides a flight control device for a UAV, the device comprising:

a gesture detection unit, configured to capture an image of the current scene with a camera preset on the UAV and to detect, with a trained deep learning model, whether a user gesture is present in the scene image;

a gesture recognition unit, configured to recognize a user gesture with a trained gesture recognition model when the user gesture is detected in the scene image, so as to determine the control gesture corresponding to the user gesture in a pre-built control gesture library; and

a flight control unit, configured to translate the corresponding control gesture into a flight control instruction for the UAV using a pre-built gesture instruction dictionary and to control the flight of the UAV according to the flight control instruction.

In another aspect, the present invention further provides a UAV, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the above UAV flight control method.

In another aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above UAV flight control method.

In the present invention, an image of the current scene is captured by a camera preset on the UAV, and a trained deep learning model detects whether a user gesture is present in the scene image. When a gesture is detected, a trained gesture recognition model recognizes the control gesture corresponding to the user gesture, the corresponding control gesture is translated into a flight control instruction for the UAV using a pre-built gesture instruction dictionary, and the flight of the UAV is controlled according to the flight control instruction. Gesture control of UAV flight is thus achieved without complex preprocessing of the images captured by the camera, which effectively improves the efficiency and success rate of gesture recognition during flight, reduces the hardware cost of UAV flight control, and makes UAV flight control simpler and more convenient.

Brief Description of the Drawings

Fig. 1 is a flowchart of the implementation of the UAV flight control method provided by Embodiment 1 of the present invention;

Fig. 2 is a schematic structural diagram of the UAV flight control device provided by Embodiment 2 of the present invention;

Fig. 3 is a schematic diagram of a preferred structure of the UAV flight control device provided by Embodiment 2 of the present invention; and

Fig. 4 is a schematic structural diagram of the UAV provided by Embodiment 3 of the present invention.

Detailed Description

In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

The specific implementation of the present invention is described in detail below in conjunction with specific embodiments:

Embodiment One:

Fig. 1 shows the implementation flow of the UAV flight control method provided by Embodiment 1 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:

In step S101, an image of the current scene is captured by a camera preset on the UAV, and a trained deep learning model detects whether a user gesture is present in the scene image.

In the embodiment of the present invention, the scene image captured by the camera on the UAV is fed into a pre-trained deep learning model, which extracts image features from the scene image and determines from these features whether a user gesture is present (or appears) in the scene image. Features are thus extracted from the scene image directly through deep learning, without complex image processing such as image segmentation, which effectively improves the efficiency and accuracy of gesture recognition.
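
As a rough illustration of this presence check only (the patent does not name a framework, a network architecture, an input size, or a decision threshold, so those are assumptions here, including the hypothetical model file gesture_presence_detector.h5), step S101 could be sketched as follows:

```python
import numpy as np
import tensorflow as tf

# Hypothetical pre-trained detector with a single sigmoid output that scores
# whether a user gesture is present in the frame.
detector = tf.keras.models.load_model("gesture_presence_detector.h5")

def gesture_present(frame: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the trained model detects a user gesture in the camera frame."""
    # Resize and normalize the raw frame; no segmentation or other complex
    # preprocessing is performed, in line with the idea of the embodiment.
    x = tf.image.resize(frame, (224, 224)) / 255.0
    x = tf.expand_dims(x, axis=0)              # add a batch dimension
    prob = float(detector(x, training=False)[0, 0])
    return prob >= threshold
```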

In step S102, when a user gesture is detected in the scene image, the user gesture is recognized by the trained gesture recognition model to determine the control gesture corresponding to the user gesture in the pre-built control gesture library.

In the embodiment of the present invention, when a user gesture is detected (or appears) in the scene image, the trained gesture recognition model recognizes the user gesture to determine the corresponding control gesture in the pre-built control gesture library. The trained gesture recognition model is the classifier used to classify user gestures, and the control gesture library contains the features of various control gestures, for example the features of a "right-hand thumbs-up" gesture or a "right-hand fist" gesture.

Preferably, the camera is mounted on the UAV via a gimbal, which rotates the camera. When the camera captures a scene image in which a user gesture appears, the gimbal adjusts the camera's shooting angle according to the change in the gesture's position in the scene image, so that the camera always faces the user's gesture. The camera can thus capture the user's gesture in real time, which facilitates gesture control of the UAV.
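
One plausible way to compute such a gimbal correction, assuming the detection step also yields a bounding box for the gesture, is sketched below; the field-of-view values and the proportional gain are illustrative assumptions rather than values given in the patent:

```python
def gimbal_correction(bbox, frame_w, frame_h,
                      fov_h_deg=62.0, fov_v_deg=48.0, gain=0.5):
    """Compute pan/tilt increments (in degrees) that re-center the detected gesture.

    bbox is (x_min, y_min, x_max, y_max) in pixels for the detected user gesture.
    """
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    # Normalized offset of the gesture center from the image center, in [-0.5, 0.5].
    dx = cx / frame_w - 0.5
    dy = cy / frame_h - 0.5
    # Convert the offset into an angular correction and damp it with a gain so the
    # gimbal converges on the gesture smoothly instead of oscillating.
    pan_deg = gain * dx * fov_h_deg
    tilt_deg = gain * dy * fov_v_deg
    return pan_deg, tilt_deg
```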

In step S103, the corresponding control gesture is translated into a flight control instruction for the UAV using the pre-built gesture instruction dictionary, and the flight of the UAV is controlled according to the flight control instruction.

In the embodiment of the present invention, the gesture instruction dictionary stores the correspondence between the control gestures in the control gesture library and preset flight control instructions. For example, when the control gesture is "right-hand fist", the corresponding flight control instruction is "take off"; when the control gesture is "left-hand fist", the corresponding instruction is "land"; when the control gesture is "right-hand thumbs-up", the corresponding instruction is "fly right"; and when the control gesture is "left-hand thumbs-up", the corresponding instruction is "fly left". These correspondences can be set by default by the system or modified by the user.
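
The gesture instruction dictionary itself can be as simple as a key-value map. The sketch below uses the example gesture and instruction pairs listed above; the key and command strings and the helper name translate_gesture are illustrative assumptions:

```python
from typing import Optional

# Default correspondence between control gestures and flight control
# instructions; these entries may be modified by the user.
GESTURE_COMMAND_DICT = {
    "right_fist": "take_off",
    "left_fist": "land",
    "right_thumbs_up": "fly_right",
    "left_thumbs_up": "fly_left",
}

def translate_gesture(control_gesture: str) -> Optional[str]:
    """Look up the flight control instruction for a recognized control gesture."""
    return GESTURE_COMMAND_DICT.get(control_gesture)
```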

In the embodiment of the present invention, the flight control instruction corresponding in the gesture instruction dictionary to the control gesture matched by the user gesture is obtained, and the flight control instruction is sent to the UAV's flight control system to realize flight control of the UAV.

Preferably, the training of the deep learning model and the gesture recognition model can be carried out through the following steps:

(1) Gesture images for training are captured by the camera on the UAV, and features are extracted from these training gesture images through the deep learning model in order to train the deep learning model.

In the embodiment of the present invention, gesture images for training, i.e., images containing gesture content, are first acquired by the camera on the UAV. Unsupervised and supervised feature extraction is then performed on these training gesture images through a preset joint network of a convolutional neural network and restricted Boltzmann machines (a joint network composed of a convolutional neural network and restricted Boltzmann machines). Specifically, a stacked network formed from a preset number of restricted Boltzmann machines first extracts unsupervised features from the training gesture images, and the convolutional neural network then extracts supervised features from them. The images are thus processed directly, yielding the two kinds of features used to train the gesture recognition model and avoiding complex preprocessing such as image segmentation and hand extraction.
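
As one deliberately simplified reading of this joint network, the sketch below stacks two Bernoulli restricted Boltzmann machines for the unsupervised features and uses a small convolutional network whose penultimate layer serves as the supervised features; the layer sizes, the 64x64 grayscale input, and the use of scikit-learn and Keras are assumptions for illustration, not details specified in the patent:

```python
import numpy as np
import tensorflow as tf
from sklearn.neural_network import BernoulliRBM

def extract_unsupervised_features(images: np.ndarray) -> np.ndarray:
    """Stacked RBMs: each RBM is trained on the hidden activations of the previous one."""
    x = images.reshape(len(images), -1) / 255.0   # flatten 64x64 grayscale frames
    rbm1 = BernoulliRBM(n_components=512, learning_rate=0.05, n_iter=20, random_state=0)
    h1 = rbm1.fit_transform(x)
    rbm2 = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)
    return rbm2.fit_transform(h1)

def build_supervised_extractor(num_classes: int) -> tf.keras.Model:
    """Small CNN trained with gesture labels; the layer named 'cnn_features'
    is read out afterwards as the supervised feature representation."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu", name="cnn_features"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```

After the CNN has been fitted on labeled gesture images, the supervised features can be obtained with a sub-model such as tf.keras.Model(model.input, model.get_layer("cnn_features").output).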

(2) The extracted features are classified and trained with a preset classifier, the trained classifier is set as the gesture recognition model, and the control gesture library is generated.

In the embodiment of the present invention, the extracted unsupervised and supervised features can be fused, and the fused features are classified by a preset classifier, i.e., the training gesture images are classified. The classifier is thereby trained, yielding a trained classifier, that is, the trained gesture recognition model, while the classified features form the control gesture library. Preferably, the classifier is a Softmax classifier, which provides a clear classification over multiple classes.
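
Continuing the same illustrative assumptions, fusing the two feature sets and training the Softmax classifier could look like the following sketch; plain concatenation as the fusion step is an assumption, since the patent only states that the features are fused:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_gesture_classifier(unsup_feats: np.ndarray,
                             sup_feats: np.ndarray,
                             labels: np.ndarray) -> LogisticRegression:
    """Fuse the two feature sets and fit a multinomial (Softmax) classifier."""
    fused = np.concatenate([unsup_feats, sup_feats], axis=1)
    # The multinomial option gives the Softmax formulation over all gesture classes.
    clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
    clf.fit(fused, labels)
    return clf
```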

(3) The gesture instruction dictionary is constructed according to the correspondence between the control gestures in the control gesture library and the flight control instructions.

In the embodiment of the present invention, an image of the current scene is captured by the camera preset on the UAV, and the trained deep learning model detects whether a user gesture is present in the scene image. When a gesture is present, the trained gesture recognition model recognizes the user gesture to determine the corresponding control gesture in the pre-built control gesture library, and the corresponding control gesture is translated into a flight control instruction for the UAV using the pre-built gesture instruction dictionary, so that the UAV's flight is controlled by the flight control instruction. This not only achieves gesture control of UAV flight but also requires no complex preprocessing of the images captured by the camera, effectively improving the efficiency and success rate of gesture recognition during flight, reducing the hardware cost of UAV flight control, and making UAV flight control simpler and more convenient.
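
Putting steps S101 to S103 together, a per-frame control loop could be sketched as below. The camera, detector, recognizer, and flight_controller objects are hypothetical stand-ins for the onboard camera driver, the trained deep learning model, the trained gesture recognition model, and the UAV's flight control system, and translate_gesture refers to the dictionary helper sketched earlier:

```python
import time

def control_loop(camera, detector, recognizer, flight_controller, period_s=0.1):
    """Run the S101 -> S102 -> S103 pipeline once per captured frame."""
    while True:
        frame = camera.capture_frame()              # S101: capture the scene image
        if detector.gesture_present(frame):         # S101: gesture presence check
            gesture = recognizer.recognize(frame)   # S102: map to a control gesture
            command = translate_gesture(gesture)    # S103: gesture -> flight instruction
            if command is not None:
                flight_controller.send(command)     # S103: control the UAV's flight
        time.sleep(period_s)                        # simple pacing between frames
```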

Embodiment Two:

Fig. 2 shows the structure of the UAV flight control device provided by Embodiment 2 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, including:

Gesture detection unit 21, configured to capture an image of the current scene with a camera preset on the UAV and to detect, with a trained deep learning model, whether a user gesture is present in the scene image.

In the embodiment of the present invention, the scene image captured by the camera on the UAV is fed into a pre-trained deep learning model, which extracts image features from the scene image and determines from these features whether a user gesture is present (or appears) in the scene image. Features are thus extracted from the scene image directly through deep learning, without complex image processing such as image segmentation, which effectively improves the efficiency and accuracy of gesture recognition.

Gesture recognition unit 22, configured to recognize a user gesture with the trained gesture recognition model when the user gesture is detected in the scene image, so as to determine the control gesture corresponding to the user gesture in the pre-built control gesture library.

In the embodiment of the present invention, when a user gesture is detected (or appears) in the scene image, the trained gesture recognition model recognizes the user gesture to determine the corresponding control gesture in the pre-built control gesture library. The trained gesture recognition model is the classifier used to classify user gestures, and the control gesture library contains the features of various control gestures, for example the features of a "right-hand thumbs-up" gesture or a "right-hand fist" gesture.

Flight control unit 23, configured to translate the corresponding control gesture into a flight control instruction for the UAV using the pre-built gesture instruction dictionary and to control the flight of the UAV according to the flight control instruction.

In the embodiment of the present invention, the gesture instruction dictionary stores the correspondence between the control gestures in the control gesture library and preset flight control instructions. For example, when the control gesture is "right-hand fist", the corresponding flight control instruction is "take off"; when the control gesture is "left-hand fist", the corresponding instruction is "land"; when the control gesture is "right-hand thumbs-up", the corresponding instruction is "fly right"; and when the control gesture is "left-hand thumbs-up", the corresponding instruction is "fly left". These correspondences can be set by default by the system or modified by the user.

In the embodiment of the present invention, the flight control instruction corresponding in the gesture instruction dictionary to the control gesture matched by the user gesture is obtained, and the flight control instruction is sent to the UAV's flight control system to realize flight control of the UAV.

Preferably, as shown in Fig. 3, the UAV flight control device further includes:

Feature extraction unit 31, configured to capture gesture images for training with the camera on the UAV and to extract features from the training gesture images through the deep learning model, in order to train the deep learning model.

In the embodiment of the present invention, gesture images for training, i.e., images containing gesture content, are first acquired by the camera on the UAV. Unsupervised and supervised feature extraction is then performed on these training gesture images through a preset joint network of a convolutional neural network and restricted Boltzmann machines (a joint network composed of a convolutional neural network and restricted Boltzmann machines). Specifically, a stacked network formed from a preset number of restricted Boltzmann machines first extracts unsupervised features from the training gesture images, and the convolutional neural network then extracts supervised features from them. The images are thus processed directly, yielding the two kinds of features used to train the gesture recognition model and avoiding complex preprocessing such as image segmentation and hand extraction.

Model training unit 32, configured to classify and train the extracted features with a preset classifier, set the trained classifier as the gesture recognition model, and generate the control gesture library.

In the embodiment of the present invention, the extracted unsupervised and supervised features can be fused, and the fused features are classified by a preset classifier, i.e., the training gesture images are classified. The classifier is thereby trained, yielding a trained classifier, that is, the trained gesture recognition model, while the classified features form the control gesture library. Preferably, the classifier is a Softmax classifier, which provides a clear classification over multiple classes.

Dictionary construction unit 33, configured to construct the gesture instruction dictionary according to the correspondence between the control gestures in the control gesture library and the flight control instructions.

Preferably, the feature extraction unit 31 includes:

Image feature extraction unit 311, configured to perform unsupervised feature extraction and supervised feature extraction on the training gesture images through the preset joint network of a convolutional neural network and restricted Boltzmann machines.

Preferably, the model training unit 32 includes:

Image feature classification unit 321, configured to classify and train, through the classifier, the image features obtained by the unsupervised feature extraction and the image features obtained by the supervised feature extraction.

Preferably, the UAV flight control device further includes a camera adjustment unit 34, wherein:

Camera adjustment unit 34, configured to adjust the shooting angle of the camera according to the change in the position of the user gesture in the scene image, so as to control the camera to always face the user gesture.

In the embodiment of the present invention, the camera may be mounted on the UAV via a gimbal, which rotates the camera. When the camera captures a scene image in which a user gesture appears, the gimbal can adjust the camera's shooting angle according to the change in the gesture's position in the scene image, so that the camera always faces the user's gesture. The camera can thus capture the user's gesture in real time, which facilitates gesture control of the UAV.

In the embodiment of the present invention, an image of the current scene is captured by the camera preset on the UAV, and the trained deep learning model detects whether a user gesture is present in the scene image. When a gesture is present, the trained gesture recognition model recognizes the user gesture to determine the corresponding control gesture in the pre-built control gesture library, and the corresponding control gesture is translated into a flight control instruction for the UAV using the pre-built gesture instruction dictionary, so that the UAV's flight is controlled by the flight control instruction. This not only achieves gesture control of UAV flight but also requires no complex preprocessing of the images captured by the camera, effectively improving the efficiency and success rate of gesture recognition during flight, reducing the hardware cost of UAV flight control, and making UAV flight control simpler and more convenient.

In the embodiment of the present invention, each unit of the UAV flight control device may be implemented by corresponding hardware or software; each unit may be an independent software or hardware unit, or the units may be integrated into one software or hardware unit, which is not intended to limit the present invention.

Embodiment Three:

Fig. 4 shows the structure of the UAV provided by Embodiment 3 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown.

The UAV 4 of the embodiment of the present invention includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40. When the processor 40 executes the computer program 42, the steps in the above method embodiment are implemented, for example steps S101 to S103 shown in Fig. 1. Alternatively, when the processor 40 executes the computer program 42, the functions of the units in the above device embodiment are implemented, for example the functions of units 21 to 23 shown in Fig. 2.

In the embodiment of the present invention, an image of the current scene is captured by the camera preset on the UAV, and the trained deep learning model detects whether a user gesture is present in the scene image. When a gesture is detected, the trained gesture recognition model recognizes the control gesture corresponding to the user gesture, the corresponding control gesture is translated into a flight control instruction for the UAV using the pre-built gesture instruction dictionary, and the flight of the UAV is controlled according to the flight control instruction. Gesture control of UAV flight is thus achieved without complex preprocessing of the images captured by the camera, which effectively improves the efficiency and success rate of gesture recognition during flight, reduces the hardware cost of UAV flight control, and makes UAV flight control simpler and more convenient.

Embodiment Four:

In an embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps in the above method embodiment, for example steps S101 to S103 shown in Fig. 1. Alternatively, when the computer program is executed by a processor, the functions of the units in the above device embodiment are implemented, for example the functions of units 21 to 23 shown in Fig. 2.

In the embodiment of the present invention, an image of the current scene is captured by the camera preset on the UAV, and the trained deep learning model detects whether a user gesture is present in the scene image. When a gesture is detected, the trained gesture recognition model recognizes the control gesture corresponding to the user gesture, the corresponding control gesture is translated into a flight control instruction for the UAV using the pre-built gesture instruction dictionary, and the flight of the UAV is controlled according to the flight control instruction. Gesture control of UAV flight is thus achieved without complex preprocessing of the images captured by the camera, which effectively improves the efficiency and success rate of gesture recognition during flight, reduces the hardware cost of UAV flight control, and makes UAV flight control simpler and more convenient.

The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium, for example a memory such as a ROM/RAM, magnetic disk, optical disk, or flash memory.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

Translated from Chinese
1. A flight control method for an unmanned aerial vehicle (UAV), characterized in that the method comprises the following steps:
capturing an image of the current scene with a camera preset on the UAV, and detecting, with a trained deep learning model, whether a user gesture is present in the scene image;
when a user gesture is detected in the scene image, recognizing the user gesture with a trained gesture recognition model to determine the control gesture corresponding to the user gesture in a pre-built control gesture library;
translating the corresponding control gesture into a flight control instruction for the UAV using a pre-built gesture instruction dictionary, and controlling the flight of the UAV according to the flight control instruction.

2. The method according to claim 1, characterized in that, before the step of capturing an image of the current scene with the camera preset on the UAV, the method further comprises:
capturing gesture images for training with the camera on the UAV, and extracting features from the gesture images for training through the deep learning model, so as to train the deep learning model;
classifying and training the extracted features with a preset classifier, setting the trained classifier as the gesture recognition model, and generating the control gesture library;
constructing the gesture instruction dictionary according to the correspondence between the control gestures in the control gesture library and the flight control instructions.

3. The method according to claim 2, characterized in that the step of extracting features from the gesture images for training through the deep learning model comprises:
performing unsupervised feature extraction and supervised feature extraction on the gesture images for training through a preset joint network of a convolutional neural network and restricted Boltzmann machines;
and the step of classifying and training the extracted features with a preset classifier comprises:
classifying and training, through the classifier, the image features obtained by the unsupervised feature extraction and the image features obtained by the supervised feature extraction.

4. The method according to claim 1, characterized in that, when a user gesture is detected in the scene image, the method further comprises:
adjusting the shooting angle of the camera according to the change in the position of the user gesture in the scene image, so as to control the camera to always face the user gesture.

5. A flight control device for a UAV, characterized in that the device comprises:
a gesture detection unit, configured to capture an image of the current scene with a camera preset on the UAV and to detect, with a trained deep learning model, whether a user gesture is present in the scene image;
a gesture recognition unit, configured to recognize a user gesture with a trained gesture recognition model when the user gesture is detected in the scene image, so as to determine the control gesture corresponding to the user gesture in a pre-built control gesture library; and
a flight control unit, configured to translate the corresponding control gesture into a flight control instruction for the UAV using a pre-built gesture instruction dictionary and to control the flight of the UAV according to the flight control instruction.

6. The device according to claim 5, characterized in that the device further comprises:
a feature extraction unit, configured to capture gesture images for training with the camera on the UAV and to extract features from the gesture images for training through the deep learning model, so as to train the deep learning model;
a model training unit, configured to classify and train the extracted features with a preset classifier, set the trained classifier as the gesture recognition model, and generate the control gesture library; and
a dictionary construction unit, configured to construct the gesture instruction dictionary according to the correspondence between the control gestures in the control gesture library and the flight control instructions.

7. The device according to claim 6, characterized in that the feature extraction unit comprises:
an image feature extraction unit, configured to perform unsupervised feature extraction and supervised feature extraction on the gesture images for training through a preset joint network of a convolutional neural network and restricted Boltzmann machines;
and the model training unit comprises:
an image feature classification unit, configured to classify and train, through the classifier, the image features obtained by the unsupervised feature extraction and the image features obtained by the supervised feature extraction.

8. The device according to claim 5, characterized in that the device further comprises:
a camera adjustment unit, configured to adjust the shooting angle of the camera according to the change in the position of the user gesture in the scene image, so as to control the camera to always face the user gesture.

9. A UAV, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 4.

10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
CN201710778269.6A | 2017-09-01 | 2017-09-01 | Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium | Pending | CN107741781A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710778269.6A (CN107741781A) | 2017-09-01 | 2017-09-01 | Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710778269.6A (CN107741781A) | 2017-09-01 | 2017-09-01 | Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium

Publications (1)

Publication Number | Publication Date
CN107741781A | 2018-02-27

Family

ID=61235859

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710778269.6A (CN107741781A, Pending) | Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium | 2017-09-01 | 2017-09-01

Country Status (1)

Country | Link
CN (1) | CN107741781A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104808799A (en) * | 2015-05-20 | 2015-07-29 | 成都通甲优博科技有限责任公司 | Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof
CN105827900A (en) * | 2016-03-31 | 2016-08-03 | 纳恩博(北京)科技有限公司 | Data processing method and electronic device
CN106227341A (en) * | 2016-07-20 | 2016-12-14 | 南京邮电大学 | Unmanned plane gesture interaction method based on degree of depth study and system
CN107071389A (en) * | 2017-01-17 | 2017-08-18 | 亿航智能设备(广州)有限公司 | Take photo by plane method, device and unmanned plane
CN106774945A (en) * | 2017-01-24 | 2017-05-31 | 腾讯科技(深圳)有限公司 | A kind of aircraft flight control method, device, aircraft and system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108460354A (en) * | 2018-03-09 | 2018-08-28 | 深圳臻迪信息技术有限公司 | Unmanned aerial vehicle (UAV) control method, apparatus, unmanned plane and system
CN108921811A (en) * | 2018-04-03 | 2018-11-30 | 阿里巴巴集团控股有限公司 | Detect method and apparatus, the article damage detector of article damage
US10929717B2 (en) | 2018-04-03 | 2021-02-23 | Advanced New Technologies Co., Ltd. | Article damage detection
CN108594995A (en) * | 2018-04-13 | 2018-09-28 | 广东小天才科技有限公司 | Electronic equipment operation method based on gesture recognition and electronic equipment
CN112203903A (en) * | 2018-05-22 | 2021-01-08 | 日产自动车株式会社 | Control device and control method for vehicle-mounted equipment
CN109799838A (en) * | 2018-12-21 | 2019-05-24 | 金季春 | A kind of training method and system
CN109782906A (en) * | 2018-12-28 | 2019-05-21 | 深圳云天励飞技术有限公司 | A kind of gesture identification method of advertisement machine, exchange method, device and electronic equipment
CN109814717B (en) * | 2019-01-29 | 2020-12-25 | 珠海格力电器股份有限公司 | Household equipment control method and device, control equipment and readable storage medium
CN109814717A (en) * | 2019-01-29 | 2019-05-28 | 珠海格力电器股份有限公司 | Household equipment control method and device, control equipment and readable storage medium
CN111461267A (en) * | 2019-03-29 | 2020-07-28 | 太原理工大学 | A Gesture Recognition Method Based on RFID Technology
CN111461267B (en) * | 2019-03-29 | 2023-04-18 | 太原理工大学 | Gesture recognition method based on RFID technology
WO2020253475A1 (en) * | 2019-06-19 | 2020-12-24 | 上海商汤智能科技有限公司 | Intelligent vehicle motion control method and apparatus, device and storage medium
CN111300402A (en) * | 2019-11-26 | 2020-06-19 | 爱菲力斯(深圳)科技有限公司 | Robot control method based on gesture recognition
CN111291634A (en) * | 2020-01-17 | 2020-06-16 | 西北工业大学 | Object detection method of UAV image based on convolution restricted Boltzmann machine
CN112732083A (en) * | 2021-01-05 | 2021-04-30 | 西安交通大学 | Unmanned aerial vehicle intelligent control method based on gesture recognition
CN113191184A (en) * | 2021-03-02 | 2021-07-30 | 深兰科技(上海)有限公司 | Real-time video processing method and device, electronic equipment and storage medium
CN113342170A (en) * | 2021-06-11 | 2021-09-03 | 北京字节跳动网络技术有限公司 | Gesture control method, device, terminal and storage medium

Similar Documents

Publication | Title
CN107741781A (en) | Flight control method and device of unmanned aerial vehicle, unmanned aerial vehicle and storage medium
CN107239728B (en) | Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation
CN105807926B (en) | A UAV Human-Computer Interaction Method Based on 3D Continuous Dynamic Gesture Recognition
CN106598226B (en) | A kind of unmanned plane man-machine interaction method based on binocular vision and deep learning
CN106227341A (en) | Unmanned plane gesture interaction method based on degree of depth study and system
CN108200334B (en) | Image capturing method, device, storage medium and electronic device
WO2018145650A1 (en) | Aircraft and control method therefor
CN109977739A (en) | Image processing method, image processing device, storage medium and electronic equipment
CN107168527A (en) | The first visual angle gesture identification and exchange method based on region convolutional neural networks
CN107909061A (en) | A kind of head pose tracks of device and method based on incomplete feature
CN110471526A (en) | A kind of human body attitude estimates the unmanned aerial vehicle (UAV) control method in conjunction with gesture identification
CN105159452B (en) | A kind of control method and system based on human face modeling
CN107748860A (en) | Method for tracking target, device, unmanned plane and the storage medium of unmanned plane
CN105912980A (en) | Unmanned plane and unmanned plane system
CN108898063A (en) | A kind of human body attitude identification device and method based on full convolutional neural networks
CN106973221B (en) | Unmanned aerial vehicle camera shooting method and system based on aesthetic evaluation
CN103500335A (en) | Photo shooting and browsing method and photo shooting and browsing device based on gesture recognition
US10971152B2 (en) | Imaging control method and apparatus, control device, and imaging device
CN108664887A (en) | Prior-warning device and method are fallen down in a kind of virtual reality experience
CN110555404A (en) | Flying wing unmanned aerial vehicle ground station interaction device and method based on human body posture recognition
CN107831791A (en) | Unmanned aerial vehicle control method and device, control equipment and storage medium
Maher et al. | Realtime human-UAV interaction using deep learning
CN110807391A (en) | Vision-based human gesture command recognition method for human-UAV interaction
Pu et al. | Aerial face recognition and absolute distance estimation using drone and deep learning
CN112183155A (en) | Method and device for establishing action posture library, generating action posture and identifying action posture

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2018-02-27)
