CN106292705A - Multi-rotor UAV mind teleoperation system based on a Bluetooth EEG headset, and operating method - Google Patents


Info

Publication number
CN106292705A
CN106292705A · CN201610824357.0A · CN201610824357A · CN 106292705 A
Authority
CN
China
Prior art keywords
eeg
sigma
uav
wave
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610824357.0A
Other languages
Chinese (zh)
Other versions
CN106292705B (en)
Inventor
焦越
阳媛
马群
郭晓艺
吴佳玲
曾欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201610824357.0A
Publication of CN106292705A
Application granted
Publication of CN106292705B
Status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a UAV mind teleoperation system and operating method based on EEG machine learning. The system comprises: an EEG perception module, including an image acquisition device that receives the UAV's flight state and surrounding environment information and an EEG measurement device that acquires the brain current pulse signals evoked in the operator's brain; a signal processing module that separates the four required EEG signals from the brain current pulse signals acquired by the EEG perception module; a deep learning module that takes the four separated EEG signals as input, performs recognition, and outputs UAV operation commands; and a ground station control module that operates the UAV according to the commands output by the deep learning module. The EEG signal feature patterns designed here for UAV control make controlling a UAV simpler and more reliable.

Description

Mind teleoperation system and operating method for a multi-rotor UAV based on a Bluetooth EEG headset

Technical Field

The present invention relates to a method and device for detecting a user's mental commands in real time and controlling a multi-rotor UAV, and in particular to a mind-control system and control method for multi-rotor UAV flight control based on the Emotiv Insight EEG headset.

Background

Wearable devices are an important invention of recent years. Combining powerful IT technologies such as cloud computing, data interaction, and software processing, they bring great convenience to daily life and deliver a new kind of intelligent perception: Google Glass can record video and take pictures, Nike+ can track human fitness metrics, and so on. The "mind headband" combines bioinformatics, neurology, signal processing, machine learning, and related fields, and is another major emerging application of smart wearable technology.

Brain-computer interaction research is developing rapidly, and together with advances in smart wearables it is gradually turning the idea of controlling objects with the brain into reality. Current technology already allows a user to put on a "mind headband" and command nearby objects to move. The Emotiv Insight brainwave headset is a lightweight, multi-band wireless headset that monitors the wearer's brain activity and translates the brainwave readings into useful commands. It can detect basic commands, for example for controlling a remote-control helicopter: forward, backward, up, and so on. The headset can also derive commands from facial expressions, for example when controlling a car: smiling means move forward, blinking the right eye means turn right.

The present invention collects the user's EEG signals with the Emotiv Insight EEG headset, Emotiv's latest product, and sends the data to a computer over Bluetooth. After the EEG signals are trained and recognized, the recognition results are further screened by an algorithm that takes the headset's signal quality into account, yielding comparatively stable recognition results. Finally, the recognition results are transmitted to the UAV through the ground station, realizing mind control of the UAV.

Summary of the Invention

The technical problem addressed by the present invention is the shortcomings of the prior art described above; it provides a flexible and operable system and method for controlling a multi-rotor UAV with EEG signals.

The present invention also provides a recognition screening method that incorporates EEG signal quality, so that low-precision recognition results can be used to control a UAV flight control system that demands high precision.

To solve the above technical problems, the present invention adopts the following technical solution:

A UAV mind teleoperation system based on EEG machine learning, characterized in that it comprises:

an EEG perception module, comprising an image acquisition device and an EEG measurement device, wherein the image acquisition device acquires the UAV's flight state and surrounding environment information, and the EEG measurement device acquires the brain current pulse signals evoked in the operator's brain;

a terminal, which separates the four required EEG signals (δ, θ, α, and β waves) from the brain current pulse signals acquired by the EEG perception module;

a deep learning module, which takes the four separated EEG signals (δ, θ, α, and β waves) as input, performs recognition, and outputs UAV operation commands;

a ground station control module, which operates the UAV according to the UAV operation commands output by the deep learning module.

Through online learning based on a BP neural network model, the deep learning module recognizes four rotor-UAV flight modes (forward, backward, left, and right) and uses EEG signal strength to control the throttle so that the aircraft ascends or descends; it screens the recognition results jointly with the signal quality to obtain the final operation command.

In the online learning method based on the BP neural network model, the bagging algorithm generates the individual networks, and a BP neural network serves as the classification model for offline learning on the samples. The outputs of all individual networks are integrated by building a geometric model and computing the decision centroid. Finally, the integrated results are screened jointly with the EEG signal quality to obtain comparatively stable control commands that can be used to control the UAV.

A UAV mind teleoperation method using any of the above EEG-machine-learning-based UAV mind teleoperation systems, characterized in that it comprises the following steps:

Step 1: read the brain current pulse signals acquired by the EEG headset and transmit them to the terminal.

Step 2: the terminal performs feature extraction on the raw EEG signals received in step 1 by the discrete short-time Fourier transform, removes the interference signals, separates out the four required EEG signals (δ, θ, α, and β waves), and stores them in a database as training samples.

Step 3: use the bagging algorithm to generate the individual networks of the ensemble, and use a BP neural network as the classification model for offline learning on each individual network's samples.

Step 4: build a geometric model and compute the decision centroid of all individual networks to obtain the ensemble result.

Step 5: pass the ensemble result through the recognition screening algorithm that incorporates EEG signal quality, obtaining comparatively stable recognition results usable for UAV control.

Step 6: send the recognition result to the ground station control module to control the UAV.

5. The UAV mind teleoperation method according to claim 4, characterized in that step 6 specifically comprises:

sending the recognition result to the multi-rotor UAV flight controller through the ground station;

the multi-rotor UAV flight controller parsing the data sent by the ground station and controlling the flight of the multi-rotor UAV.

Step 2 comprises the following process:

Using the discrete short-time Fourier transform, the EEG signal is transformed from the time domain to the frequency domain for feature extraction and interference removal, and the δ, θ, α, and β waves are extracted. The discrete short-time Fourier transform formula is:

$$\mathrm{STFT}\{x[n]\}(m,n)=X(w_k)=\sum_{n=0}^{R+1}x[n]\cdot\left(0.53836-0.46164\cos\frac{2\pi(n-m)}{R-1}\right)\cdot e^{-jw_k n}$$

where x[n] is the input discrete signal, i.e., the raw EEG signal; X(w_k) is the short-time Fourier transform of x[n]w(n-m); R is the window length; and w_k is the fixed center frequency.

Set the window length R to 2 s, sampling 1024 points each time. According to the frequency bands delta: 1-4 Hz, theta: 4-7 Hz, alpha: 8-13 Hz, and beta: 13-30 Hz, set the fixed center frequencies to w1 = 2.5 Hz, w2 = 5.5 Hz, w3 = 10.5 Hz, and w4 = 21.5 Hz, and substitute them into the transform above to extract and separate in the frequency domain the spectra of the δ, θ, α, and β waves, denoted Xd(w1), Xt(w2), Xa(w3), and Xb(w4) respectively. The inverse short-time Fourier transform is:

$$D(n)=\frac{1}{L}\sum_{m}\sum_{n}^{L-1}X_d(w_1)\,e^{j\frac{2\pi}{L}n}$$

$$T(n)=\frac{1}{L}\sum_{m}\sum_{n}^{L-1}X_t(w_2)\,e^{j\frac{2\pi}{L}n}$$

$$A(n)=\frac{1}{L}\sum_{m}\sum_{n}^{L-1}X_a(w_3)\,e^{j\frac{2\pi}{L}n}$$

$$B(n)=\frac{1}{L}\sum_{m}\sum_{n}^{L-1}X_b(w_4)\,e^{j\frac{2\pi}{L}n}$$

where L is the number of frequency sampling points.

This yields the real-time time-domain values D(n), T(n), A(n), B(n) of the δ, θ, α, and β waves. The values D(n), T(n), A(n), B(n) are stored in the database as one sample. Through training of the neural network model, a sample set S = {x_i | i = 1, 2, 3, …, N} is obtained, where x_i is a single training sample comprising D(n), T(n), A(n), B(n) and the corresponding ideal output, and N is the number of training samples.
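A minimal sketch of this band-extraction step, assuming NumPy. The sampling rate, hop size, and synthetic test signal are illustrative choices, not taken from the patent; only the Hamming coefficients, window size, and center frequencies follow the text above.

```python
import numpy as np

def extract_band(x, fs, f_center, win_len):
    """Hamming-windowed short-time Fourier transform evaluated at one
    fixed centre frequency f_center (Hz): this acts as a band-pass
    filter and returns the band's magnitude envelope over time."""
    n = np.arange(win_len)
    window = 0.53836 - 0.46164 * np.cos(2 * np.pi * n / (win_len - 1))  # Hamming
    hop = win_len // 2                       # 50% overlap (illustrative)
    envelope = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * window
        # DFT coefficient at the fixed centre frequency (band-pass behaviour)
        coeff = np.sum(seg * np.exp(-2j * np.pi * f_center * n / fs))
        envelope.append(np.abs(coeff) / win_len)
    return np.array(envelope)

# Synthetic "EEG": an alpha-band tone plus a weaker delta-band tone
fs = 512                                     # 1024 samples per 2 s window
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10.5 * t) + 0.3 * np.sin(2 * np.pi * 2.5 * t)
centres = {"delta": 2.5, "theta": 5.5, "alpha": 10.5, "beta": 21.5}
bands = {name: extract_band(eeg, fs, fc, 1024) for name, fc in centres.items()}
```

With this input, the alpha and delta envelopes dominate the theta and beta ones, which is the separation the step relies on.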

Step 3 comprises the following process:

Samples are drawn at random from the original sample set by the bootstrap technique to form M training subsets {S_m | m = 1, 2, 3, …, M}. Each subset is usually the same size as the original training set, and samples may be selected repeatedly.
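The bootstrap-resampling step can be sketched in plain Python as follows; the sample values, subset count, and seed are illustrative.

```python
import random

def bagging_subsets(samples, m_networks, seed=0):
    """Draw M bootstrap training subsets (sampling with replacement,
    each the same size as the original set), one per individual network."""
    rng = random.Random(seed)
    n = len(samples)
    return [[samples[rng.randrange(n)] for _ in range(n)] for _ in range(m_networks)]

# e.g. five individual networks trained on five resampled copies of 100 samples
subsets = bagging_subsets(list(range(100)), m_networks=5)
```

Because sampling is with replacement, each subset typically contains duplicates and omits some original samples, which is what gives the individual networks their diversity.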

Each training subset is trained with a BP neural network. Let the ideal output of subset S_m be O = {O_m}, 1 ≤ m ≤ M, and define the error function

$$E(W,w)=\frac{1}{2}\|O-\zeta\|^{2}=\frac{1}{2}\sum_{m=1}^{M}\left[O_m-g\left(\sum_{p=1}^{P}W_{mp}\,g\left(\sum_{n=1}^{N}w_{pn}\xi_n\right)\right)\right]^{2}$$

where W = {W_mp} (1 ≤ m ≤ M, 1 ≤ p ≤ P) and w = {w_pn} (1 ≤ p ≤ P, 1 ≤ n ≤ N) are the weight matrices between the output layer and the hidden layer and between the hidden layer and the input layer, respectively, and ξ = (ξ1, …, ξn)^T ∈ R^n is the input sample;

$$\zeta=g(W_m\cdot\tau)=g\left(\sum_{p=1}^{P}W_{mp}\tau_p\right),\quad m=1,2,\ldots,M$$

is the actual output of the network;

$$\tau_p=g(w_p\cdot\xi)=g\left(\sum_{n=1}^{N}w_{pn}\xi_n\right),\quad p=1,\ldots,P$$

is the hidden-layer output of the network.

For the current weights W^k and w^k, the weight increments are defined as

$$W_{mp}^{k}=-\eta\frac{\partial E}{\partial W_{mp}}+\alpha W_{mp}^{k-1},\quad p=1,\ldots,P;\; m=1,2,\ldots,M;\; k\ge 1$$

$$w_{pn}^{k}=-\eta\frac{\partial E}{\partial w_{pn}}+\alpha w_{pn}^{k-1},\quad p=1,\ldots,P;\; n=1,2,\ldots,N;\; k\ge 1$$

where α is the momentum factor.
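A minimal single-hidden-layer sketch of this training rule, assuming NumPy. The error function is E = ½‖O − ζ‖² as above and the weight increments carry a momentum term; the XOR task, bias trick, layer sizes, learning rate, and momentum factor are illustrative assumptions, not values from the patent.

```python
import numpy as np

def g(z):
    """Logistic activation used for both layers."""
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, O, hidden=4, eta=0.2, alpha=0.6, epochs=3000, seed=0):
    """BP training with momentum: the increment rule mirrors
    delta_w^k = -eta * dE/dw + alpha * delta_w^(k-1)."""
    rng = np.random.default_rng(seed)
    N, M = X.shape[1], O.shape[1]
    w = rng.normal(0.0, 0.5, (hidden, N))   # input -> hidden weights
    W = rng.normal(0.0, 0.5, (M, hidden))   # hidden -> output weights
    dW, dw = np.zeros_like(W), np.zeros_like(w)
    errors = []
    for _ in range(epochs):
        tau = g(X @ w.T)                    # hidden-layer outputs tau
        zeta = g(tau @ W.T)                 # actual network outputs zeta
        errors.append(0.5 * np.sum((O - zeta) ** 2))   # E = 0.5 * ||O - zeta||^2
        delta_out = (zeta - O) * zeta * (1 - zeta)     # output-layer deltas
        delta_hid = (delta_out @ W) * tau * (1 - tau)  # back-propagated deltas
        dW = -eta * delta_out.T @ tau + alpha * dW     # momentum update
        dw = -eta * delta_hid.T @ X + alpha * dw
        W, w = W + dW, w + dw
    return w, W, errors

# Hypothetical example: XOR with a constant bias input appended
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
O = np.array([[0], [1], [1], [0]], float)
w, W, errors = train_bp(X, O)
```

Training on each bootstrap subset from the previous step with this loop yields one individual network per subset.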

Step 4 comprises the following process:

Let the prediction of each classifier be h_i ∈ {0, 1, 2, 3, 4}, representing the recognized action types neutral, forward, backward, left, and right respectively. Map the prediction to a two-dimensional vector t_i(x, y), with the neutral state defined as (0, 0), forward as (0, 1), backward as (0, -1), left as (-1, 0), and right as (1, 0). Compute

$$t(x_t,y_t)=\frac{1}{M}\sum_{i=1}^{M}t_i,\qquad p=\|t(x_t,y_t)\|=\sqrt{x_t^{2}+y_t^{2}}$$

where M is the number of sub-classifiers. If p is less than the preset threshold, the output of the ensemble classifier is taken to be the neutral state; if p is greater than the preset threshold, the output of the ensemble classifier is determined by the position of t(x_t, y_t) relative to the four coordinate axes in the plane.
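The decision-centroid integration can be sketched as follows; the threshold value and the "nearest axis" tie-breaking rule are illustrative assumptions, since the patent only specifies that the output follows the centroid's position relative to the axes.

```python
import math

# Map each classifier label to the 2-D vector defined in the text
VEC = {0: (0, 0), 1: (0, 1), 2: (0, -1), 3: (-1, 0), 4: (1, 0)}  # neutral/F/B/L/R

def integrate(predictions, threshold=0.4):
    """Average the member predictions as 2-D vectors, then decide by
    the centroid's norm (neutral below threshold) and nearest axis."""
    m = len(predictions)
    x = sum(VEC[h][0] for h in predictions) / m
    y = sum(VEC[h][1] for h in predictions) / m
    p = math.hypot(x, y)
    if p < threshold:
        return 0                      # neutral
    if abs(x) >= abs(y):
        return 4 if x > 0 else 3      # right / left
    return 1 if y > 0 else 2          # forward / backward

# e.g. five member networks: three vote "forward", one "left", one neutral
label = integrate([1, 1, 1, 3, 0])
```

Here the centroid is (-0.2, 0.6), its norm exceeds the threshold, and the y-axis dominates, so the ensemble outputs "forward".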

Step 5 comprises the following process:

The headset's signal quality is sampled at a fixed interval T, and a recognition result is returned each time. A sliding window of size N is set, keeping the most recent N results including the current one.

The output of the ensemble classifier from step 4 is again mapped to a two-dimensional vector T_n(x, y), with the neutral state defined as (0, 0), forward as (0, 1), backward as (0, -1), left as (-1, 0), and right as (1, 0); the action intensity is recorded as P_n. The signal quality collected by the headset comprises the wireless signal quality S_n (S_n = 0, 1, 2) and the signal quality of the five electrodes; the arithmetic mean of the five electrodes' signal quality gives Q_n (0 ≤ Q_n ≤ 4).

Incorporating the headset's signal quality, the recognition results are screened as follows:

(1) When S_n < 2: reset the recognition result to neutral with intensity 0.

(2) When S_n = 2: compute the average electrode signal quality over the window

$$\bar{Q}=\frac{1}{N}\sum_{i=n-N+1}^{n}Q_i$$

Set a quality warning value Q_t. If $\bar{Q}<Q_t$, the system prompts the user to re-wear the headset and resets the recognition result to neutral with intensity 0; if $\bar{Q}\ge Q_t$, the effective recognition result is computed as follows:

$$T(x_t,y_t)=\frac{1}{4N}\sum_{i=n-N+1}^{n}Q_i\cdot P_i\cdot T_i,\qquad P=\|T(x_t,y_t)\|=\sqrt{x_t^{2}+y_t^{2}}$$

Select a recognition range angle θ and compute the direction angle θ_t of the vector T(x_t, y_t).

a) If 0 ≤ θ_t < θ or 2π - θ ≤ θ_t < 2π, the recognition result is right, with intensity P = T·x_0;

b) if π/2 - θ ≤ θ_t < π/2 + θ, the recognition result is forward, with intensity P = T·y_0;

c) if π - θ ≤ θ_t < π + θ, the recognition result is left, with intensity P = T·x_0;

d) if 3π/2 - θ ≤ θ_t < 3π/2 + θ, the recognition result is backward, with intensity P = T·y_0;

where x_0 and y_0 are the unit vectors of the x-axis and y-axis, respectively.

Finally, an intensity threshold S is set, and recognition results with intensity P < S are discarded.
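The whole screening rule can be sketched as one function; the warning value, range angle, intensity threshold, and the use of closed sector comparisons are illustrative assumptions.

```python
import math

def screen(window, S_n, Q_t=2.0, theta=math.pi / 4, S_min=0.1):
    """Quality-weighted screening over the last N results.
    window: list of (Q_i, P_i, (x_i, y_i)) tuples with electrode quality,
    action intensity and the 2-D direction vector of each result.
    S_n: wireless signal quality (0, 1 or 2). Returns (label, intensity)."""
    N = len(window)
    if S_n < 2:
        return "neutral", 0.0                        # poor radio link: discard
    Q_bar = sum(q for q, _, _ in window) / N
    if Q_bar < Q_t:                                  # electrodes too noisy:
        return "neutral", 0.0                        # prompt user to re-wear
    x = sum(q * p * vx for q, p, (vx, vy) in window) / (4 * N)
    y = sum(q * p * vy for q, p, (vx, vy) in window) / (4 * N)
    P = math.hypot(x, y)
    if P < S_min:
        return "neutral", 0.0                        # below intensity threshold
    t = math.atan2(y, x) % (2 * math.pi)             # direction angle theta_t
    if t < theta or t >= 2 * math.pi - theta:
        return "right", abs(x)
    if abs(t - math.pi / 2) < theta:
        return "forward", abs(y)
    if abs(t - math.pi) < theta:
        return "left", abs(x)
    if abs(t - 3 * math.pi / 2) < theta:
        return "backward", abs(y)
    return "neutral", 0.0                            # outside all sectors

# Five consecutive high-quality "forward" results
result = screen([(4, 0.8, (0, 1))] * 5, S_n=2)
```

With a full-quality window pointing forward, the function returns a forward command whose intensity is the quality-weighted average; with a degraded radio link it falls back to neutral.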

The control system controls the multi-rotor UAV in real time, as follows:

The ground station uses the Paparazzi system to communicate with the UAV. In the actual control flow, the PC terminal converts the EEG recognition result into a standard data message, writes it to the data-transmission radio, and sends it to the UAV via the PPRZLINK protocol. After receiving the PPRZLINK message, the UAV flight controller parses its data field and converts the recognition result into control commands.
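PPRZLINK defines its own message framing, which is not reproduced here. Purely as an illustration of the pack-transmit-parse pattern described above, the following sketch uses a hypothetical frame layout (start byte, length, payload, 8-bit checksum) that is not the actual PPRZLINK format.

```python
import struct

def pack_command(direction: int, throttle: float) -> bytes:
    """Illustrative datalink frame (NOT the real PPRZLINK layout):
    start byte 0x99, payload length, payload (direction id + float32
    throttle), and an 8-bit additive checksum over length + payload."""
    payload = struct.pack("<Bf", direction, throttle)
    body = bytes([len(payload)]) + payload
    checksum = sum(body) & 0xFF
    return b"\x99" + body + bytes([checksum])

def unpack_command(frame: bytes):
    """Verify and parse a frame produced by pack_command."""
    assert frame[0] == 0x99, "bad start byte"
    length = frame[1]
    payload = frame[2:2 + length]
    assert (sum(frame[1:-1]) & 0xFF) == frame[-1], "checksum mismatch"
    return struct.unpack("<Bf", payload)

frame = pack_command(direction=1, throttle=0.6)      # e.g. "forward" at 60%
direction, throttle = unpack_command(frame)
```

On the flight-controller side, the parse step corresponds to extracting the data field and mapping it onto attitude and throttle commands.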

The multi-rotor UAV mind-control system of the present invention, based on a Bluetooth EEG headset, uses the non-invasive Emotiv Insight EEG sensor module to sense the brain current pulses (i.e., EEG signals) evoked in the subject by the environment (the real scene of the UAV flight, or the first-person-view (FPV) scene in virtual reality glasses) and by mental activity (intention). Using the BP neural network and the bagging algorithm from neural network ensemble techniques, it classifies EEG signal patterns for UAV control (i.e., it recognizes the four UAV flight modes forward, backward, left, and right, and uses EEG signal strength to control the throttle so that the aircraft ascends or descends). A screening algorithm filters the classification results to obtain stable recognition results. Finally, the EEG recognition results are communicated to the flight controller via the data-transmission radio, realizing teleoperation of the trajectory and speed of a micro multi-rotor UAV.

The present invention uses the FPV first-person view for immersive learning, which improves the EEG recognition rate. With neural network ensemble techniques, simply training multiple neural networks and combining their results significantly improves generalization; in practice the results are far better than the learning performance of a single neural network. By building a geometric model and computing the decision centroid, decisions across the four modes are realized; this not only provides an integration method for the sub-networks' results, but also overcomes low and imprecise recognition, making it possible to control a UAV that demands high precision with lower-precision EEG recognition results. The EEG signal feature patterns designed here for UAV control make controlling a UAV simpler and more reliable. Beyond UAV control, the invention has broad prospects in the field of intelligent control.

Brief Description of the Drawings

Figure 1 is the overall framework of the EEG-headset-based multi-rotor UAV mind-control system of the present invention.

Figure 2 is the signal acquisition model of the EEG headset of the present invention.

Figure 3 is a block diagram of the hardware structure of the quadrotor aircraft of the present invention.

Figure 4 is the overall interface design of the present invention.

Figure 5 shows the time-domain and frequency-domain plots of the Hamming window function used in the short-time Fourier transform of the present invention.

Figure 6 is a schematic of the bagging-based neural network ensemble structure.

Figure 7 is a flow chart of the EEG training of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings.

The present invention is a machine learning method that uses brain waves as features, applied to a real-time control system for a multi-rotor UAV. The hardware comprises an EEG headset, a quadrotor aircraft, and a pair of virtual reality (VR) glasses, where:

The EEG headset is the Emotiv Insight Bluetooth EEG headset produced by Emotiv (USA). The device has five EEG detectors and two reference sensors, capturing brain waves from the prefrontal cortex (responsible for executing actions), the parietotemporal region (hearing and coordination), and the occipital region (vision). The headset uses dry electrodes to monitor the AF3, AF4, T7, T8, and PZ regions, samples at 128 Hz, converts the signals to digital form, and transmits them via Bluetooth to Emotiv EmoEngine (a dedicated processing software component) running on the PC, which finally passes the signals to the Emotiv application programming interface (Emotiv API). The headset integrates a wireless Bluetooth module supporting the Bluetooth A2DP profile, with an effective communication range of 10 m and a baud rate of 115200 bps.

The quadrotor aircraft carries an open-source flight control core module, a combined satellite navigation and micro-inertial-navigation positioning module, a first-person-view (FPV) camera, a long-range remote-control receiver, and data/video transmission modules. Its accompanying ground station handles communication between the aircraft and the ground.

The virtual reality glasses receive the images streamed back in real time from the FPV camera, so the subject experiences following the UAV as it observes the environment, improving the accuracy of EEG recognition.

The system receives and recognizes EEG signals and remotely controls the UAV in the following steps:

Step 1: connect the EEG headset, read the raw EEG signals, and transmit them to the ground station.

The raw signal contains characteristic waves in various frequency bands, such as α, β, and θ waves. Through built-in algorithms, the headset fits the characteristics of the different frequency bands and provides functions such as per-band power spectral density and blink detection. In the present invention, the subject's different mental commands are recognized by recording the characteristics of signals in different frequency bands across the brain regions. Table 1 shows a partial division of brain-wave frequency bands and the mental states reflected by each type of brain wave.

Table 1. Partial division of brain-wave frequency bands and corresponding mental states

  Band        Frequency    Mental state
  δ (delta)   1-4 Hz       deep sleep
  θ (theta)   4-7 Hz       drowsiness, light sleep, meditation
  α (alpha)   8-13 Hz      relaxed, calm wakefulness
  β (beta)    13-30 Hz     alert, focused, active thinking

Step 2: connect the VR glasses and launch the quadrotor to hover at a preset altitude. The camera mounted at the front of the aircraft streams its footage back in real time, so the subject experiences observing the environment along with the UAV.

In EEG training and recognition, the sense of presence plays an important role in the EEG test; experiments show that the recognition accuracy with a sense of presence is higher than that of meditation-based EEG recognition.

Step 3: the subject puts on the VR glasses and, guided by the images streamed from the aircraft, imagines the aircraft flying in a specific direction. Meanwhile the ground station computer receives the data from the EEG headset via Bluetooth, performs feature extraction with the discrete short-time Fourier transform, and separates interference signals such as eye movements and muscle tremor from the raw brain-wave signal to obtain the required brain-wave band waveforms.

EEG signals are non-stationary, generally in the 0.5-100 Hz range; the effective frequencies in the present invention are 4-14 Hz. The discrete short-time Fourier transform is used to convert the time domain to the frequency domain:

$$\mathrm{STFT}\{x[n]\}(m,n)=X(w_k)=\sum_{n=0}^{R+1}\{x[n]\,w(n-m)\}\cdot e^{-jw_k n}$$

where x[n] is the input discrete signal, i.e., the raw EEG signal; X(w_k) is the short-time Fourier transform of x[n]w(n-m); R is the window length; w_k is the fixed center frequency; and w[n] is the window function.

When the frequency is fixed, X(w_k) can be viewed as the output of passing the signal through a band-pass filter centered at w_k. The Hamming window is chosen as the window function here because of its low-pass frequency response, and the exponential term $e^{-jw_k n}$ modulates x(n) (here the raw EEG signal), shifting its spectrum: the component of the x(n) spectrum corresponding to w_k is translated to zero frequency, so the combination acts as a band-pass filter.

By comparison, muscle and blink artifacts are much higher in frequency, while eye-movement artifacts are lower. Exploiting this difference in frequency characteristics between EEG and artifact signals, the raw brain-wave signal is passed through the discrete short-time Fourier transform with w_k set to the center frequency of each band, which separates out the effective bands while removing interference from artifacts such as eye movements and muscle tremor. Finally, the inverse short-time Fourier transform yields the real-time value of each band over time, and the results are stored in the database.

步骤4,使用Bagging算法生成集成神经网络中的个体网络,使用BP神经网络作为分类模型对个体网络中的样本进行离线学习;通过建立几何模型,计算所有个体网络的决策重心,得到集成网络结果;Step 4, use the Bagging algorithm to generate individual networks in the integrated neural network, and use the BP neural network as a classification model to conduct offline learning on the samples in the individual networks; by establishing a geometric model, calculate the decision center of gravity of all individual networks, and obtain the integrated network results;

神经网络被认为是一种较好的非线性分类方法,尤其是BP神经网络。BP神经网络结构简单,非线性处理能力却很强。然而,在使用BP神经网络时也存在一些困难,如隐单元数目难以确定、网络的最终权值受初始值影响大、易陷入局部最优、对训练样本的数量与质量要求较高等,这些因素影响了网络的泛化能力,使得运动想像的分类效果不太理想。Neural network is considered to be a better nonlinear classification method, especially BP neural network. The structure of BP neural network is simple, but its nonlinear processing ability is very strong. However, there are also some difficulties when using BP neural network, such as the difficulty in determining the number of hidden units, the final weight of the network is greatly affected by the initial value, easy to fall into local optimum, and high requirements for the number and quality of training samples, etc. These factors It affects the generalization ability of the network, making the classification effect of motor imagery not ideal.

神经网络集成通过训练多个神经网络并将其结果进行合成,可以显著地提高神经网络系统的泛化能力。该项研究不仅有助于对机器学习和神经网络的深入研究,还有利于利用神经网络技术来解决现实世界中的实际应用问题。Neural network ensembles, which train multiple neural networks and combine their results, can significantly improve the generalization ability of a neural network system. This line of research not only contributes to the deeper study of machine learning and neural networks, but also facilitates the use of neural network technology to solve practical real-world problems.

本发明使用Bagging算法来生成集成中的子网络,使用BP神经网络作为分类器,最后通过建立数学模型,计算决策重心的方法获得集成分类器最终的输出。The invention uses the Bagging algorithm to generate the subnetwork in the integration, uses the BP neural network as the classifier, and finally obtains the final output of the integrated classifier by establishing a mathematical model and calculating the center of gravity of the decision.

在训练时,使用者应佩戴好VR眼镜和蓝牙耳机并保持专注,根据眼镜中的实时景象想象预定的指令,地面站电脑对回传的数据进行学习并存入训练数据库。一般一个心理指令需要多次训练,每次训练持续一段时间。一个心理指令训练成功后方可进行下一个指令的训练,有用的心理指令包括上、下、东、南、西、北。During training, users should wear VR glasses and Bluetooth headsets and keep focused. According to the real-time scene in the glasses, imagine the predetermined instructions, and the ground station computer will learn the returned data and store them in the training database. Generally, a mental instruction requires multiple training sessions, and each session lasts for a period of time. The training of the next command can only be carried out after a mental command is successfully trained. Useful mental commands include up, down, east, south, west, and north.

步骤5,联合耳机信号质量对识别结果进行筛选,获取稳定的识别结果,并通过数传电台发送。Step 5: Screen the recognition results jointly with the headset signal quality to obtain stable results, and send them over the data-link radio.

对于步骤4中集成网络的输出结果,保存一段时间内结果的序列,通过使用信号质量加权的方法,再次对其进行合成,获取较为稳定的结果。方法如下:For the output result of the integrated network in step 4, save the sequence of results for a period of time, and synthesize them again by using the method of signal quality weighting to obtain a relatively stable result. Methods as below:

每隔固定时间T采集一次耳机的信号质量,并返回一次识别结果.设定滑动窗口大小为N,保存包括当前数据的前N次数据。Collect the signal quality of the earphone every fixed time T, and return a recognition result. Set the size of the sliding window to N, and save the previous N data including the current data.

将识别的动作类型定义为二维向量Tn(x,y),定义中性状态为(0,0),向前为(0,1),向后为(0,-1),向左为(-1,0),向右为(1,0)。同时记录动作强度为Pn。耳机采集到的信号质量包括无线信号质量Sn(Sn=0,1,2)和5个电极的信号质量。对5个电极的信号质量求算数平均,得到Qn(0≤Qn≤4),数值越大代表信号质量越好。Define the recognized action type as a two-dimensional vector Tn (x,y), with the neutral state as (0,0), forward as (0,1), backward as (0,-1), left as (-1,0), and right as (1,0). Record the action intensity Pn at the same time. The signal quality collected by the headset comprises the wireless signal quality Sn (Sn = 0,1,2) and the quality of the five electrodes; the arithmetic mean of the five electrode qualities gives Qn (0≤Qn ≤4), where a larger value means better signal quality.
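The mapping and the electrode-quality average above can be sketched as follows; the function and dictionary names are hypothetical, only the vector values and the 0..4 quality scale come from the text.

```python
# Hypothetical helper names; the mapping values are taken from the text above.
ACTION_VECTORS = {
    "neutral": (0.0, 0.0),
    "forward": (0.0, 1.0),
    "backward": (0.0, -1.0),
    "left": (-1.0, 0.0),
    "right": (1.0, 0.0),
}

def action_to_vector(action):
    """Map a recognized action type to the 2-D vector Tn(x, y)."""
    return ACTION_VECTORS[action]

def electrode_quality(levels):
    """Arithmetic mean Qn of the five per-electrode quality levels (0..4)."""
    assert len(levels) == 5
    return sum(levels) / 5.0
```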

联合耳机的信号质量,对识别结果的筛选方法如下:Combined with the signal quality of the headset, the screening method for the recognition results is as follows:

(1)Sn<2时:说明无线信号质量较差,往往是由于耳机的佩戴者离接收器太远导致的,系统发出警告,并将识别结果重置为中性,强度为0;(1) When Sn < 2: the wireless signal quality is poor, usually because the wearer of the headset is too far from the receiver; the system issues a warning and resets the recognition result to neutral with intensity 0;

(2)Sn=2时:计算电极在一定时间内的平均信号强度(2) When Sn = 2: Calculate the average signal strength of the electrode within a certain period of time

$$\bar{Q} = \frac{1}{N}\sum_{i=n-N+1}^{n} Q_i$$

设定强度警戒值Qt,若$\bar{Q} < Q_t$,则认为电极信号质量差,系统提示使用者重新佩戴耳机,Set the intensity warning value Qt ; if $\bar{Q} < Q_t$, the electrode signal quality is considered poor and the system prompts the user to re-seat the headset,

并将识别结果重置为中性,强度为0;若$\bar{Q} \ge Q_t$,计算有效识别结果如下and resets the recognition result to neutral with intensity 0; if $\bar{Q} \ge Q_t$, the effective recognition result is computed as follows

$$T(x_t, y_t) = \frac{1}{4N}\sum_{i=n-N+1}^{n} Q_i \cdot P_i \cdot T_i,\qquad P = \|T(x_t, y_t)\| = \sqrt{x_t^2 + y_t^2}$$

选定识别范围角2θ,计算T(xt,yt)的向量角θt。Select the recognition range angle 2θ and compute the vector angle θt of T(xt , yt ).

a)若0≤θt<θ或2π-θ≤θt<2π,识别结果为向右,强度P=T·x0a) If 0≤θt <θ or 2π-θ≤θt <2π, the recognition result is to the right, and the intensity P=T x0 ;

b)若π/2-θ≤θt<π/2+θ,识别结果为向前,强度P=T·y0;b) If π/2-θ≤θt <π/2+θ, the recognition result is forward, and the intensity P=T·y0 ;

c)若π-θ≤θt<π+θ,识别结果为向左,强度P=T·x0c) If π-θ≤θt <π+θ, the recognition result is to the left, and the intensity P=T x0 ;

d)若3π/2-θ≤θt<3π/2+θ,识别结果为向后,强度P=T·y0;d) If 3π/2-θ≤θt <3π/2+θ, the recognition result is backward, and the intensity P=T·y0 ;

其中,x0、y0分别为x轴和y轴的单位向量。Wherein, x0 and y0 are unit vectors of the x-axis and y-axis respectively.

设定强度阈值S,舍去强度P<S的识别结果。Set the intensity threshold S, and discard the recognition results with intensity P<S.
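The screening method above (wireless-quality check, electrode average $\bar{Q}$, quality-weighted fusion, angular sector decision, intensity threshold) can be sketched as one function. The function name, the tuple layout of the window entries, and the default values of Q_t, θ and S are illustrative assumptions, not the patent's fixed choices.

```python
import math

def screen(window, S_n, Q_t=1.5, theta=math.pi / 4, S=0.2):
    """Quality-weighted fusion of the last N recognition results.

    window: list of N (Q_i, P_i, (x, y)) tuples, oldest first.
    S_n: wireless signal quality (0, 1 or 2).
    Returns (action, intensity). Default parameter values are illustrative.
    """
    N = len(window)
    if S_n < 2:                       # poor wireless link: reset to neutral
        return "neutral", 0.0
    Q_bar = sum(q for q, _, _ in window) / N
    if Q_bar < Q_t:                   # poor electrode contact: reset to neutral
        return "neutral", 0.0
    # effective result T(x_t, y_t) = (1 / 4N) * sum Q_i * P_i * T_i
    xt = sum(q * p * t[0] for q, p, t in window) / (4 * N)
    yt = sum(q * p * t[1] for q, p, t in window) / (4 * N)
    P = math.hypot(xt, yt)
    if P < S:                         # below the intensity threshold: discard
        return "neutral", 0.0
    angle = math.atan2(yt, xt) % (2 * math.pi)    # vector angle theta_t
    if angle < theta or angle >= 2 * math.pi - theta:
        return "right", abs(xt)
    if abs(angle - math.pi / 2) < theta:
        return "forward", abs(yt)
    if abs(angle - math.pi) < theta:
        return "left", abs(xt)
    if abs(angle - 3 * math.pi / 2) < theta:
        return "backward", abs(yt)
    return "neutral", 0.0
```

With a full-width sector angle 2θ = π/2 the four sectors tile the plane, so every above-threshold result maps to exactly one direction.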

步骤6,将识别结果通过地面站发送给多旋翼无人机飞控。Step 6. Send the recognition result to the flight controller of the multi-rotor UAV through the ground station.

地面站使用了Paparazzi系统与无人机通信。Paparazzi是一个开源的无人机硬件和软件系统,它包括了自动驾驶系统和地面站软件。实际控制流程中,PC将脑电识别结果转化为控制指令,并写入数传电台,最终通过PPRZLINK协议发送给无人机。表二是PPRZLINK的标准报文格式。The ground station uses the Paparazzi system to communicate with the UAV. Paparazzi is an open-source UAV hardware and software system comprising an autopilot and ground station software. In the actual control flow, the PC converts the EEG recognition results into control commands, writes them to the data-link radio, and finally sends them to the UAV via the PPRZLINK protocol. Table 2 shows the standard PPRZLINK message format.
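As a rough illustration of the "header + data field + checksum" framing mentioned above, the sketch below builds a PPRZ-style frame with a start byte, a length byte, and two running checksums. This is only a simplified illustration of the structure; the authoritative byte layout is defined by the Paparazzi PPRZLINK specification, not by this code.

```python
PPRZ_STX = 0x99  # start byte of the PPRZ transport layer

def pprz_frame(payload):
    """Build a simplified PPRZ-style frame: STX, LENGTH, payload, CK_A, CK_B.

    LENGTH counts every byte of the frame; the two running checksums cover
    LENGTH and the payload. This only illustrates the framing idea and is
    not an authoritative PPRZLINK implementation.
    """
    length = len(payload) + 4          # STX + LENGTH + CK_A + CK_B overhead
    ck_a = ck_b = 0
    for b in bytes([length]) + bytes(payload):
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return bytes([PPRZ_STX, length]) + bytes(payload) + bytes([ck_a, ck_b])
```

The receiver recomputes both checksums over the same span and drops the frame on any mismatch, which is what lets the flight controller reject corrupted commands.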

表2 PPRZ标准报文格式Table 2 PPRZ standard message format

步骤7,四旋翼飞行器的飞控系统接收地面站发送的数据并进行解析,控制无人机飞行。Step 7. The flight control system of the quadrotor aircraft receives and analyzes the data sent by the ground station to control the flight of the UAV.

实施例:Example:

本发明采用Emotiv Insight脑电耳机和Paparazzi开源飞控组成意念控制的无人机系统。VR眼镜用来给使用者提供飞行器所处的模拟环境,提高脑电识别率。基于脑电耳机的无人机意念控制系统的整体框架图如附图1,其包括四个部分,分别是脑电感知模块,深度学习模块,地面站控制系统接口和无人机飞控与动力系统。脑电耳机的信号采集模型如附图2。四旋翼飞行器的硬件结构框图如附图3。使用者头戴Emotiv Insight耳机和VR眼镜,耳机通过其贴在前额和耳部的干电极实时获取使用者的脑电波电压值,通过内置算法,将电压值转换成反映脑电波参数特征的数字信号,通过蓝牙发送至地面站,地面站对数据进行训练与识别,将最终的结果发送给无人机飞控,从而控制飞行器的飞行。The present invention combines an Emotiv Insight EEG headset with the Paparazzi open-source flight controller to form a mind-controlled UAV system. VR glasses present the user with a simulated view of the aircraft's surroundings and improve the EEG recognition rate. The overall framework of the EEG-headset-based UAV mind-control system is shown in Figure 1; it comprises four parts: the EEG perception module, the deep learning module, the ground station control system interface, and the UAV flight control and power system. The signal acquisition model of the EEG headset is shown in Figure 2, and the hardware block diagram of the quadrotor in Figure 3. The user wears the Emotiv Insight headset and VR glasses; the headset acquires the user's EEG voltages in real time through dry electrodes on the forehead and ears, converts the voltages with a built-in algorithm into digital signals reflecting the EEG parameters, and sends them over Bluetooth to the ground station, which trains on and recognizes the data and sends the final result to the UAV flight controller, thereby controlling the aircraft's flight.

整体接口设计,见附图4。For the overall interface design, see Figure 4.

步骤一,使用者正确佩戴脑电耳机与VR眼镜,将耳机前额传感器贴于右前额眉骨之上5厘米左右的位置,将参考电极紧贴耳根后部,确保传感器与参考电极和皮肤完全接触,并启动蓝牙连接配对。Step 1: The user puts on the EEG headset and VR glasses correctly, attaching the headset's forehead sensor about 5 cm above the brow bone on the right forehead and pressing the reference electrode against the back of the ear, ensuring that the sensor and reference electrode are in full contact with the skin, and then starts Bluetooth pairing.

步骤二,脑电耳机通过内置的算法将采集到的各个脑区原始脑电波电压U转换成原始数字脑电信号,通过蓝牙传至地面站。Step 2: The EEG headset converts the collected original EEG voltage U of each brain region into an original digital EEG signal through a built-in algorithm, and transmits it to the ground station through Bluetooth.

步骤三,地面站蓝牙接收来自脑电耳机送来的数据,将原始数据从时域变换到频域,进行特征提取,将脑电中theta,alpha,beta,delta四种脑电信号提取出来,同时将原始脑电信号中高频眨眼肌肉抖动,低频眼动等伪迹去除。离散短时傅里叶变换公式如下:Step 3, the ground station Bluetooth receives the data sent by the EEG headset, transforms the original data from the time domain to the frequency domain, and performs feature extraction to extract the four EEG signals of theta, alpha, beta, and delta in the EEG. At the same time, artifacts such as high-frequency blinking muscle jitter and low-frequency eye movement in the original EEG signal are removed. The discrete short-time Fourier transform formula is as follows:

$$\mathrm{STFT}\{x[n]\}(m,n) = X(w_k) = \sum_{n=0}^{R-1}\{x[n]\,w(n-m)\}\cdot e^{-jw_k n}$$

x[n]是输入的离散信号,即原始脑电信号;x[n] is the input discrete signal, that is, the original EEG signal;

X(wk)是x[n]w(n-m)的短时傅里叶变化结果。X(wk ) is the short-time Fourier transform result of x[n]w(nm).

R表示窗口长度;R represents the window length;

wk是固定的中心频率;wk is a fixed center frequency;

w[n]表示窗函数,w[n] represents the window function,

此发明中用到海明窗,可以抵消高频信号的干扰,海明窗函数的时域和频域函数图如图5所示,函数表达式如下:The Hamming window is used in this invention, which can offset the interference of high-frequency signals. The time domain and frequency domain function diagrams of the Hamming window function are shown in Figure 5, and the function expression is as follows:

$$w(n) = 0.53836 - 0.46164\cos\!\left(\frac{2\pi n}{N-1}\right)$$

N为采样次数,即窗口长度。N is the number of samples, that is, the window length.

将上面两式合并,即可得到窗函数为海明窗的离散短时傅里叶变换公式:Combining the above two formulas, the discrete short-time Fourier transform formula with the window function as the Hamming window can be obtained:

$$\mathrm{STFT}\{x[n]\}(m,n) = X(w_k) = \sum_{n=0}^{R-1} x[n]\cdot\left(0.53836 - 0.46164\cos\!\left(\frac{2\pi(n-m)}{R-1}\right)\right)\cdot e^{-jw_k n}$$

将窗口长度R设为2s,每次采样1024个点。根据delta:1-3Hz,theta:4-7Hz,alpha:8-14Hz,beta:14-30Hz各自的频率段,将固定中心频率wk分别设为w1=2Hz,w2=5.5Hz,w3=11Hz,w4=22Hz,带入上面的变换公式即可频域中提取分离得到delta,theta,alpha,beta各自的频率谱分别表示为Xd(w1),Xt(w2),Xa(w3),Xb(w4),利用短时傅里叶反变换,公式如下:Set the window length R as 2s, and sample 1024 points each time. According to the respective frequency bands of delta: 1-3Hz, theta: 4-7Hz, alpha: 8-14Hz, beta: 14-30Hz, set the fixed center frequency wk to w1 = 2Hz, w2 = 5.5Hz, w3 =11Hz, w4 =22Hz, put it into the above transformation formula to extract and separate in the frequency domain to obtain delta, theta, alpha, and beta’s respective frequency spectrum respectively expressed as Xd (w1 ), Xt (w2 ) , Xa (w3 ), Xb (w4 ), using inverse short-time Fourier transform, the formula is as follows:

$$D(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_d(w_1)\,e^{j\frac{2\pi}{L}n}$$

$$T(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_t(w_2)\,e^{j\frac{2\pi}{L}n}$$

$$A(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_a(w_3)\,e^{j\frac{2\pi}{L}n}$$

$$B(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_b(w_4)\,e^{j\frac{2\pi}{L}n}$$

L为频率采样点数,因为窗口长度为2s,结合本硬件的采样频率,L=1024。L is the number of frequency sampling points, because the window length is 2s, combined with the sampling frequency of this hardware, L=1024.

即可得到时域中delta,theta,alpha,beta四种脑电波的实时变化值D,T,A,B。将其存入数据库,作为一次样本。记录飞行器在悬停、前进、后退、向左飞行、向右飞行不同状态下的脑电数据,得到样本集S={xi|i=1,2,3…N},其中xi为单个训练样本,包括D(n),T(n),A(n),B(n)以及对应的理想输出结果,N为训练样本个数。This yields the real-time time-domain values D, T, A, and B of the delta, theta, alpha, and beta waves, which are stored in the database as one sample. EEG data are recorded while the aircraft hovers, moves forward, moves backward, flies left, and flies right, giving a sample set S={xi |i=1,2,3...N}, where xi is a single training sample comprising D(n), T(n), A(n), B(n) and the corresponding ideal output, and N is the number of training samples.
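The band-separation step above amounts to a sliding Hamming-windowed DFT evaluated at one fixed center frequency per band. The sketch below illustrates this with assumed values (fs = 512 Hz, window of 512 samples, 50 % hop, a synthetic 10 Hz test tone); it is not the patent's exact implementation.

```python
import numpy as np

def band_component(x, fs, f_k, R):
    """Sliding Hamming-windowed DFT of x at one center frequency f_k (Hz).

    A window of length R hops by R // 2 samples; keeping only the DFT
    coefficient at f_k gives that band's complex amplitude per window.
    """
    n = np.arange(R)
    w = 0.53836 - 0.46164 * np.cos(2 * np.pi * n / (R - 1))  # Hamming window
    carrier = np.exp(-2j * np.pi * f_k / fs * n)
    hops = range(0, len(x) - R + 1, R // 2)
    return np.array([np.sum(x[m:m + R] * w * carrier) for m in hops])

# A 10 Hz test tone responds strongly near the alpha center frequency (11 Hz)
# and is strongly attenuated at the delta center frequency (2 Hz).
fs = 512
t = np.arange(2 * fs) / fs
tone = np.sin(2 * np.pi * 10 * t)
alpha = band_component(tone, fs, 11, 512)
delta = band_component(tone, fs, 2, 512)
```

The Hamming window's low sidelobes are what keep energy from one band from leaking into the center frequencies of the others.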

步骤四,使用Bagging算法生成集成神经网络中的个体网络,使用BP神经网络作为分类模型对个体网络中的样本进行离线学习;Step 4, using the Bagging algorithm to generate individual networks in the integrated neural network, and using the BP neural network as a classification model to perform offline learning on the samples in the individual networks;

本发明使用基于Bagging的神经网络集成结构如图6所示。其具体实现方法为:The present invention uses a neural network integration structure based on Bagging as shown in FIG. 6 . Its specific implementation method is:

在原始样本集中通过bootstrap技术随机抽取样本构成M个子训练集{Sm|m=1,2,3…M},子集的训练规模通常与原始训练集相当,样本允许重复选择。In the original sample set, samples are randomly selected by bootstrap technology to form M sub-training sets {Sm |m=1, 2, 3...M}. The training scale of the sub-sets is usually equivalent to the original training set, and samples are allowed to be repeatedly selected.
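The bootstrap step above can be sketched in a few lines; the function name and seed are illustrative, only the sampling-with-replacement behaviour and the "subset size equals original size" convention come from the text.

```python
import random

def bootstrap_subsets(samples, M, seed=0):
    """Draw M bootstrap sub-training sets S_m, each the size of the original
    set, sampling with replacement (the Bagging step described above)."""
    rng = random.Random(seed)
    n = len(samples)
    return [[samples[rng.randrange(n)] for _ in range(n)] for _ in range(M)]
```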

使用BP神经网络对各个子训练集进行训练。设子训练集Sm的理想输出为O={Om}1≤m≤M,定义误差函数Use BP neural network to train each sub-training set. Let the ideal output of sub-training set Sm be O={Om }1≤m≤M , define the error function

$$E(W,w) = \frac{1}{2}\|O-\zeta\|^2 = \frac{1}{2}\sum_{m=1}^{M}\left[O_m - g\!\left(\sum_{p=1}^{P}W_{mp}\,g\!\left(\sum_{n=1}^{N}w_{pn}\,\xi_n\right)\right)\right]^2$$

其中,W={Wmp}1≤m≤M,1≤p≤P和w={wpn}1≤p≤P,1≤n≤N分别为输出层与隐层之间的权矩阵和隐层和输入层之间的权矩阵,ξ=(ξ1,…,ξn)T∈Rn为输入样本,Among them, W={Wmp }1≤m≤M, 1≤p≤P and w={wpn }1≤p≤P, 1≤n≤N are the weight matrices between the output layer and the hidden layer and The weight matrix between the hidden layer and the input layer, ξ=(ξ1 ,…,ξn )T ∈ Rn is the input sample,

$$\zeta = g(W_m\cdot\tau) = g\!\left(\sum_{p=1}^{P}W_{mp}\,\tau_p\right),\quad m=1,2,\ldots,M$$

为网络实际输出,is the actual output of the network,

$$\tau_p = g(w_p\cdot\xi) = g\!\left(\sum_{n=1}^{N}w_{pn}\,\xi_n\right),\quad p=1,\ldots,P$$

为网络的隐层输出。is the output of the hidden layer of the network.

对当前权值Wk和wk定义权值的增量为For the current weights Wk and wk , define the weight increment as

$$\Delta W_{mp}^{k} = -\eta\frac{\partial E}{\partial W_{mp}} + \alpha\,\Delta W_{mp}^{k-1},\quad p=1,\ldots,P;\ m=1,2,\ldots,M;\ k\ge 1$$

$$\Delta w_{pn}^{k} = -\eta\frac{\partial E}{\partial w_{pn}} + \alpha\,\Delta w_{pn}^{k-1},\quad p=1,\ldots,P;\ n=1,2,\ldots,N;\ k\ge 1$$

其中η为学习率,α为动量项因子。where η is the learning rate and α is the momentum factor.
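The weight increment above is ordinary gradient descent with a momentum term; a minimal sketch, with illustrative values for η and α:

```python
import numpy as np

def momentum_step(w, grad, prev_delta, eta=0.1, alpha=0.9):
    """One gradient-descent step with momentum for a weight array:
    delta^k = -eta * dE/dw + alpha * delta^(k-1). The defaults for the
    learning rate eta and momentum factor alpha are assumed values."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta
```

The momentum term reuses a fraction of the previous increment, which damps oscillation and helps the BP training escape shallow local minima.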

训练的程序框图如附图7。训练的大致流程如下:The program block diagram of the training is shown in Figure 7. The general process of training is as follows:

在开始训练之前,需要先设定训练的动作类型,如果没有指定动作类型,则默认训练中性状态(COG_NEUTRAL);使用者需要在发送开始训练的指令之前想象指定的动作,在发送指令之后,会有一段时间的延时来避免从中性到指定状态过渡的干扰。紧接着是8秒钟的数据采集阶段,数据采集结束后,会根据采集信号的质量提示采集成功或者失败。如果采集失败,则重新进行采集;如果采集成功,则使用者还可以选择是否接受此次采集结果,以避免使用者在此次采集过程中无法保持注意力集中的情况。最后,若使用者选择接受,则该用户的资料被更新。在放松的状态(中性状态)训练成功后,以同样的方法训练前、后、左、右四种状态。Before training starts, the action type to be trained must be set; if none is specified, the neutral state (COG_NEUTRAL) is trained by default. The user must begin imagining the specified action before sending the start-training command; after the command is sent, a short delay avoids interference from the transition out of the neutral state. An 8-second data collection phase follows, after which the system reports success or failure based on the quality of the collected signal. If collection fails, it is repeated; if it succeeds, the user may still choose whether to accept the result, guarding against lapses of concentration during collection. Finally, if the user accepts, that user's profile is updated. Once the relaxed (neutral) state has been trained successfully, the forward, backward, left, and right states are trained in the same way.

步骤五,使用者发出心理指令(中性、上、下、左、右),地面站对脑电信号的特征进行识别。各个子网络的输出通过建立几何模型并计算决策重心的方法进行集成。并将集成后的结果联合耳机信号的质量进行筛选,获得较为稳定的结果,最终将结果通过PPRZLINK协议发送给飞行器控制系统。其具体方法为:Step 5, the user issues a mental command (neutral, up, down, left, right), and the ground station recognizes the characteristics of the EEG signal. The output of each sub-network is integrated by building a geometric model and calculating the decision center of gravity. And the integrated results are screened with the quality of the earphone signal to obtain a relatively stable result, and finally the result is sent to the aircraft control system through the PPRZLINK protocol. The specific method is:

假定各个分类器的预测值为hi∈{0,1,2,3,4},分别代表识别动作类型为中性、前、后、左、右。将预测值映射为二维向量ti(x,y),定义中性状态为(0,0),向前为(0,1),向后为(0,-1),向左为(-1,0),向右为(1,0)。计算Assume the prediction of each classifier is hi ∈{0,1,2,3,4}, representing the recognized action types neutral, forward, backward, left, and right. Map the prediction to a two-dimensional vector ti (x,y), defining the neutral state as (0,0), forward as (0,1), backward as (0,-1), left as (-1,0), and right as (1,0). Compute

$$t(x_t,y_t) = \frac{1}{M}\sum_{i=1}^{M} t_i,\qquad p = \|t(x_t,y_t)\| = \sqrt{x_t^2+y_t^2}$$

其中,M为子分类器的数量。若p的值小于预先设定的阈值,则认为集成分类器的输出为中性状态;若p的值大于预先设定的阈值,则根据t(xt,yt)在平面中与四个坐标轴的位置关系来确定集成分类器的输出。where M is the number of sub-classifiers. If p is below a preset threshold, the output of the ensemble classifier is taken to be the neutral state; if p exceeds the threshold, the output is determined by the position of t(xt , yt ) in the plane relative to the four coordinate axes.
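The decision centre of gravity above can be sketched as follows; the nearest-axis rule and the default p_threshold are assumptions made for illustration.

```python
import math

# Prediction labels 0..4 map to the 2-D vectors of the geometric model above.
LABEL_VECS = {0: (0, 0), 1: (0, 1), 2: (0, -1), 3: (-1, 0), 4: (1, 0)}

def ensemble_vote(predictions, p_threshold=0.5):
    """Decision centre of gravity of M sub-classifier outputs.

    Averages the mapped vectors, thresholds the norm p, then picks the
    nearest coordinate axis; p_threshold is an assumed value.
    """
    M = len(predictions)
    xt = sum(LABEL_VECS[h][0] for h in predictions) / M
    yt = sum(LABEL_VECS[h][1] for h in predictions) / M
    if math.hypot(xt, yt) < p_threshold:
        return 0                                  # neutral
    if abs(xt) >= abs(yt):
        return 4 if xt > 0 else 3                 # right / left
    return 1 if yt > 0 else 2                     # forward / backward
```

Because disagreeing sub-networks pull the centre of gravity toward the origin, a split vote falls below the threshold and safely resolves to neutral.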

为去除干扰和防止偶然因素,避免对飞行器过于频繁的控制,需要联合耳机信号的质量对识别结果进行筛选,获取较为稳定的识别结果,方法为:In order to remove interference, prevent accidental factors, and avoid too frequent control of the aircraft, it is necessary to combine the quality of the earphone signal to screen the recognition results to obtain a relatively stable recognition result. The method is as follows:

每隔固定时间T采集一次耳机的信号质量,并返回一次识别结果.设定滑动窗口大小为N,保存包括当前数据的前N次数据。Collect the signal quality of the earphone every fixed time T, and return a recognition result. Set the size of the sliding window to N, and save the previous N data including the current data.

将集成分类器的输出再次映射为二维向量Tn(x,y),定义中性状态为(0,0),向前为(0,1),向后为(0,-1),向左为(-1,0),向右为(1,0)。同时记录动作强度为Pn。耳机采集到的信号质量包括无线信号质量Sn(Sn=0,1,2)和5个电极的信号质量。对5个电极的信号质量求算数平均,得到Qn(0≤Qn≤4).数值越大代表信号质量越好。Map the output of the integrated classifier to a two-dimensional vector Tn (x,y) again, define the neutral state as (0,0), forward as (0,1), and backward as (0,-1), Left is (-1,0), right is (1,0). At the same time, record the action intensity as Pn . The signal quality collected by the earphone includes the wireless signal quality Sn (Sn =0,1,2) and the signal quality of the five electrodes. Calculate the arithmetic average of the signal quality of the five electrodes to obtain Qn (0≤Qn ≤4). The larger the value, the better the signal quality.

联合耳机的信号质量,对识别结果的筛选方法如下:Combined with the signal quality of the headset, the screening method for the recognition results is as follows:

(1)每隔1s查询一次识别结果和耳机信号质量。设定窗口大小为7,保存7次以内的识别结果。(1) Query the recognition result and headphone signal quality every 1s. Set the window size to 7, and save the recognition results within 7 times.

(2)判断Sn的大小。若Sn<2,系统发出警告,并将识别结果重置为中性,强度为0;(2) Judge the size of Sn . If Sn <2, the system will issue a warning and reset the recognition result to neutral and the intensity to 0;

(3)计算电极在一定时间内的平均信号强度(3) Calculate the average signal strength of the electrode within a certain period of time

$$\bar{Q} = \frac{1}{7}\sum_{i=n-6}^{n} Q_i$$

设定强度警戒值Qt=1.5,若$\bar{Q} < Q_t$,则认为电极信号质量差,系统提示使用者重新佩戴耳机,并将识别结果重置为中性,强度为0;Set the intensity warning value Qt = 1.5; if $\bar{Q} < Q_t$, the electrode signal quality is considered poor, the system prompts the user to re-seat the headset, and the recognition result is reset to neutral with intensity 0;

(4)计算有效识别结果如下(4) Calculate the effective recognition results as follows

$$T(x_t,y_t) = \frac{1}{28}\sum_{i=n-6}^{n} Q_i\cdot P_i\cdot T_i,\qquad P = \|T(x_t,y_t)\| = \sqrt{x_t^2+y_t^2}$$

T(xt,yt)和P为等效动作类型和等效动作强度。计算T(xt,yt)的向量角θt。T(xt , yt ) and P are the equivalent action type and equivalent action intensity. Compute the vector angle θt of T(xt , yt ).

a)若0≤θt<θ或2π-θ≤θt<2π,识别结果为向右,强度P=T·x0;a) If 0≤θt <θ or 2π-θ≤θt <2π, the recognition result is to the right, and the intensity P=T·x0 ;

b)若π/2-θ≤θt<π/2+θ,识别结果为向前,强度P=T·y0;b) If π/2-θ≤θt <π/2+θ, the recognition result is forward, and the intensity P=T·y0 ;

c)若π-θ≤θt<π+θ,识别结果为向左,强度P=T·x0;c) If π-θ≤θt <π+θ, the recognition result is to the left, and the intensity P=T·x0 ;

d)若3π/2-θ≤θt<3π/2+θ,识别结果为向后,强度P=T·y0;d) If 3π/2-θ≤θt <3π/2+θ, the recognition result is backward, and the intensity P=T·y0 ;

其中,x0、y0分别为x轴和y轴的单位向量。Wherein, x0 and y0 are unit vectors of the x-axis and y-axis respectively.

(5)设定强度阈值S=0.2,若P<S,舍去当前识别结果,保留上一次识别结果。(5) Set the intensity threshold S=0.2, if P<S, discard the current recognition result and keep the last recognition result.

将第(5)步得到的稳定的识别结果的动作类型和强度写入PPRZLINK报文的数据域,添加报头和校验和,写入数传电台发送。Write the action type and strength of the stable recognition result obtained in step (5) into the data field of the PPRZLINK message, add a header and a checksum, and write it into the data transmission station for transmission.

步骤六,无人机飞控对数据报进行解析,获取数据域中的动作类型和强度,将动作类型的中性、前、后、左、右分别定义为飞行器的悬停、向南飞行、向北飞行、向东飞行和向西飞行,控制无人机。Step 6: The UAV flight controller parses the datagram, extracts the action type and intensity from the data field, and maps the action types neutral, forward, backward, left, and right to hovering, flying south, flying north, flying east, and flying west respectively, thereby controlling the UAV.

本发明的效果是使用者通过想象某种动作,能够控制四旋翼飞行器做出相应的动作,实现了“意念控制无人机”的构想。随着技术的完善与进步,本发明将会有广泛而丰富的应用前景。The effect of the present invention is that the user can control the quadrotor aircraft to make a corresponding action by imagining a certain action, realizing the idea of "mind control drone". With the perfection and progress of the technology, the present invention will have extensive and abundant application prospects.

以上所述仅是本发明的优选实施方式,应当指出:对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make further improvements and refinements without departing from the principles of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the invention.

Claims (10)

Translated from Chinese
1.基于脑电机器学习的无人机意念遥操作系统,其特征在于,包括:一脑电感知模块,包括图像采集装置以及脑电测量设备,所述图像采集装置用于接收无人机的飞行状态及周围的环境信息,所述脑电测量设备用于获取无人机操作者脑部激发的大脑电流脉冲信号;一信号处理模块,从所述脑电感知模块获取的大脑电流脉冲信号中分离出四种需要的脑电信号δ波、θ波、α波、β波;一深度学习模块,将分离出的四种脑电信号δ波、θ波、α波、β波作为输入进行识别并输出无人机操作指令;一地面站控制模块,根据所述深度学习模块输出的无人机操作指令对无人机进行操作。1. A UAV mind teleoperation system based on EEG machine learning, characterized by comprising: an EEG perception module, including an image acquisition device and an EEG measurement device, the image acquisition device receiving the UAV's flight state and surrounding environment information, and the EEG measurement device acquiring the brain current pulse signals evoked in the operator's brain; a signal processing module, which separates the four required EEG signals (δ, θ, α and β waves) from the brain current pulse signals acquired by the EEG perception module; a deep learning module, which takes the four separated EEG signals as input, performs recognition, and outputs UAV operation commands; and a ground station control module, which operates the UAV according to the UAV operation commands output by the deep learning module.

2.根据权利要求1所述的无人机意念遥操作系统,其特征在于,所述深度学习模块通过基于BP神经网络模型的线上学习,识别向前、向后、向左、向右的四种旋翼无人机飞行模式,并用脑电信号强弱来控制油门大小使得飞行器上升或下降;联合信号质量,对识别结果进行筛选,获取最终操作指令。2. The UAV mind teleoperation system according to claim 1, wherein the deep learning module recognizes the four rotor-UAV flight modes forward, backward, left and right through online learning based on a BP neural network model, uses the EEG signal strength to control the throttle so that the aircraft climbs or descends, and screens the recognition results jointly with the signal quality to obtain the final operation command.

3.根据权利要求2所述的无人机意念遥操作系统,其特征在于,基于BP神经网络模型的线上学习方法,使用bagging算法生成个体网络,将BP神经网络作为分类模型对样本进行离线学习;对于所有个体网络的输出,通过建立几何模型,计算决策重心的方法进行集成;最后将集成后的结果通过联合脑电信号进行筛选,获得可以用于控制无人机的较为稳定的控制指令。3. The UAV mind teleoperation system according to claim 2, wherein the online learning method based on the BP neural network model uses the bagging algorithm to generate individual networks and uses the BP neural network as the classification model for offline learning on the samples; the outputs of all individual networks are integrated by establishing a geometric model and computing the decision centre of gravity; finally, the integrated results are screened jointly with the EEG signal to obtain relatively stable control commands usable for controlling the UAV.

4.根据权利要求1、2或3所述的无人机意念遥操作系统,其特征在于,所述图像采集装置为显示器或者VR眼镜,所述脑电测量设备为脑电耳机或者脑电头盔。4. The UAV mind teleoperation system according to claim 1, 2 or 3, wherein the image acquisition device is a display or VR glasses, and the EEG measurement device is an EEG headset or an EEG helmet.

5.一种采用权利要求1-4任一所述基于脑电机器学习的无人机意念遥操作系统的无人机意念遥操作方法,其特征在于,包括以下步骤:第一步,读取脑电耳机获取的大脑电流脉冲信号并传至信号处理模块;第二步,信号处理模块对第一步收到的原始脑电信号通过离散短时傅里叶变换的方法,进行特征提取和去除其中的干扰信号,分离出四种需要的脑电信号δ波、θ波、α波及β波并存入数据库,作为训练样本;第三步,使用Bagging算法生成集成神经网络中的个体网络,使用BP神经网络作为分类模型对个体网络中的样本进行离线学习;第四步,建立几何模型,计算所有个体网络的决策重心,得到集成网络结果;第五步,将集成网络的结果通过联合脑电信号质量的识别筛选算法,获取到可以用于无人机控制的较为稳定的识别结果;第六步,将识别结果发送给地面站控制模块控制无人机。5. A UAV mind teleoperation method using the EEG-machine-learning-based UAV mind teleoperation system of any one of claims 1-4, characterized by comprising the following steps: Step 1, read the brain current pulse signals acquired by the EEG headset and transmit them to the signal processing module; Step 2, the signal processing module performs feature extraction on the raw EEG received in Step 1 by the discrete short-time Fourier transform, removes the interference signals, separates the four required EEG signals (δ, θ, α and β waves), and stores them in the database as training samples; Step 3, use the Bagging algorithm to generate the individual networks of the ensemble neural network and use a BP neural network as the classification model for offline learning on the samples of each individual network; Step 4, establish a geometric model and compute the decision centre of gravity of all individual networks to obtain the ensemble result; Step 5, pass the ensemble result through a screening algorithm joint with the EEG signal quality to obtain relatively stable recognition results usable for UAV control; Step 6, send the recognition results to the ground station control module to control the UAV.

6.根据权利要求5所述的无人机意念遥操作方法,其特征在于,所述步骤六具体包括:
6. The UAV mind teleoperation method according to claim 5, wherein Step 6 specifically comprises: 将识别结果通过地面站发送给多旋翼无人机飞控;send the recognition result to the multi-rotor UAV flight controller through the ground station; 多旋翼无人机飞控对地面站发来的数据进行解析,控制多旋翼无人机飞行。the multi-rotor UAV flight controller parses the data sent by the ground station and controls the flight of the multi-rotor UAV.

7.根据权利要求5所述的无人机意念遥操作方法,其特征在于,所述第二步包括以下过程:利用离散短时傅里叶变换的方法,将脑电信号EEG从时域变换到频域,进行特征提取和去除其中的干扰信号,将脑电中δ波、θ波、α波及β波四种脑电信号提取出来,离散短时傅里叶变换公式如下:7. The UAV mind teleoperation method according to claim 5, wherein Step 2 comprises the following process: transform the EEG signal from the time domain to the frequency domain by the discrete short-time Fourier transform, perform feature extraction and remove the interference signals, and extract the four EEG components δ, θ, α and β; the discrete short-time Fourier transform formula is:

$$\mathrm{STFT}\{x[n]\}(m,n) = X(w_k) = \sum_{n=0}^{R-1} x[n]\cdot\left(0.53836-0.46164\cos\!\left(\frac{2\pi(n-m)}{R-1}\right)\right)\cdot e^{-jw_k n}$$

x[n]是输入的离散信号,即原始脑电信号EEG;X(wk)是短时傅里叶变换结果;R表示窗口长度;wk是固定的中心频率,m、n是自变量,j是虚数单位;x[n] is the input discrete signal, i.e. the raw EEG; X(wk ) is the short-time Fourier transform result; R is the window length; wk is the fixed center frequency; m and n are the independent variables; j is the imaginary unit.

将窗口长度R设为2s,每次采样1024个点;根据delta:1-4Hz,theta:4-7Hz,alpha:8-13Hz,beta:13-30Hz各自的频率段,将固定中心频率wk分别设为w1=2.5Hz,w2=5.5Hz,w3=10.5Hz,w4=21.5Hz,带入上面的变换公式,即可在频域中提取分离得到δ波、θ波、α波及β波各自的频率谱,分别表示为Xd(w1),Xt(w2),Xa(w3),Xb(w4),利用短时傅里叶反变换,公式如下:Set the window length R to 2 s, sampling 1024 points each time; according to the bands delta: 1-4 Hz, theta: 4-7 Hz, alpha: 8-13 Hz, beta: 13-30 Hz, set the fixed center frequencies wk to w1 = 2.5 Hz, w2 = 5.5 Hz, w3 = 10.5 Hz, w4 = 21.5 Hz and substitute into the transform formula above, extracting in the frequency domain the spectra of the δ, θ, α and β waves, denoted Xd (w1 ), Xt (w2 ), Xa (w3 ), Xb (w4 ) respectively; the inverse short-time Fourier transform is:

$$D(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_d(w_1)\,e^{j\frac{2\pi}{L}n}$$
$$T(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_t(w_2)\,e^{j\frac{2\pi}{L}n}$$
$$A(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_a(w_3)\,e^{j\frac{2\pi}{L}n}$$
$$B(n) = \frac{1}{L}\sum_{m}\sum_{n=0}^{L-1} X_b(w_4)\,e^{j\frac{2\pi}{L}n}$$

L为频率采样点数;L is the number of frequency sampling points.

即可得到时域中δ波、θ波、α波及β波的实时变化值D(n),T(n),A(n),B(n);将实时变化值D(n),T(n),A(n),B(n)存入数据库,作为一次样本;通过神经网络模型的训练,得到样本集S={xi|i=1,2,3…N},其中xi为单个训练样本,包括D(n),T(n),A(n),B(n)以及对应的理想输出结果,N为训练样本个数。This yields the real-time time-domain values D(n), T(n), A(n), B(n) of the δ, θ, α and β waves, which are stored in the database as one sample; through training of the neural network model a sample set S={xi |i=1,2,3...N} is obtained, where xi is a single training sample comprising D(n), T(n), A(n), B(n) and the corresponding ideal output, and N is the number of training samples.

8.根据权利要求7所述的无人机意念遥操作方法,其特征在于,所述第三步包括以下过程:在原始样本集中通过bootstrap技术随机抽取样本构成M个子训练集{Sm|m=1,2,3…M},子集的训练规模通常与原始训练集相当,样本允许重复选择;8. The UAV mind teleoperation method according to claim 7, wherein Step 3 comprises the following process: in the original sample set, draw samples at random with the bootstrap technique to form M sub-training sets {Sm |m=1,2,3...M}; the training size of each subset is usually comparable to the original training set, and samples may be selected repeatedly.

使用BP神经网络对各个子训练集进行训练:设子训练集Sm的理想输出为O={Om}1≤m≤M,定义误差函数Train each sub-training set with a BP neural network: let the ideal output of sub-training set Sm be O={Om }1≤m≤M , and define the error function

$$E(W,w) = \frac{1}{2}\sum_{m=1}^{M}\left[O_m - g\!\left(\sum_{p=1}^{P}W_{mp}\,g\!\left(\sum_{n=1}^{N}w_{pn}\,\xi_n\right)\right)\right]^2$$

其中,W={Wmp}1≤m≤M,1≤p≤P和w={wpn}1≤p≤P,1≤n≤N分别为输出层与隐层之间的权矩阵和隐层和输入层之间的权矩阵,ξ=(ξ1,…,ξn)T∈Rn为输入样本,where W={Wmp }1≤m≤M, 1≤p≤P and w={wpn }1≤p≤P, 1≤n≤N are the weight matrices between the output layer and the hidden layer and between the hidden layer and the input layer respectively, and ξ=(ξ1 ,…,ξn )T ∈ Rn is the input sample;

$$\zeta = g\!\left(\sum_{p=1}^{P}W_{mp}\,\tau_p\right),\quad m=1,2,\ldots,M$$

为网络实际输出;is the actual output of the network;

$$\tau_p = g\!\left(\sum_{n=1}^{N}w_{pn}\,\xi_n\right),\quad p=1,\ldots,P$$

为网络的隐层输出;is the hidden-layer output of the network.

对当前权值Wk和wk定义权值的增量为For the current weights Wk and wk , define the weight increments as

$$\Delta W_{mp}^{k} = -\eta\frac{\partial E}{\partial W_{mp}} + \alpha\,\Delta W_{mp}^{k-1},\quad p=1,\ldots,P;\ m=1,2,\ldots,M;\ k\ge 1$$
$$\Delta w_{pn}^{k} = -\eta\frac{\partial E}{\partial w_{pn}} + \alpha\,\Delta w_{pn}^{k-1},\quad p=1,\ldots,P;\ n=1,2,\ldots,N;\ k\ge 1$$

其中η为学习率,α为动量项因子。where η is the learning rate and α is the momentum factor.

9.根据权利要求8所述的无人机意念遥操作方法,其特征在于,所述第四步包括以下过程:
The UAV mind-teleoperation method according to claim 8, wherein the fourth step comprises the following processes:

Let the prediction value of each classifier be h_i ∈ {0, 1, 2, 3, 4}, representing the recognized action types neutral, forward, backward, left and right respectively. Map each prediction value to a two-dimensional vector t_i(x, y), defining the neutral state as (0, 0), forward as (0, 1), backward as (0, −1), left as (−1, 0) and right as (1, 0); then compute

$$t(x_t,y_t)=\frac{1}{M}\sum_{i=1}^{M}t_i(x,y),\qquad p=\lVert t(x_t,y_t)\rVert=\sqrt{x_t^2+y_t^2}$$

where M is the number of sub-classifiers. If the value of p is smaller than a preset threshold, the output of the ensemble classifier is taken to be the neutral state; if the value of p is greater than the preset threshold, the output of the ensemble classifier is determined from the position of t(x_t, y_t) relative to the four coordinate axes in the plane.

The fifth step comprises the following processes:

The signal quality of the headset is sampled once every fixed interval T, and one recognition result is returned each time; a sliding window of size N is set, and the most recent N results (including the current one) are saved.

The output of the ensemble classifier in the fourth step is again mapped to a two-dimensional vector T_n(x, y), with the neutral state defined as (0, 0), forward as (0, 1), backward as (0, −1), left as (−1, 0) and right as (1, 0); at the same time the action intensity is recorded as P_n. The signal quality collected by the
headset includes the wireless signal quality S_n (S_n = 0, 1, 2) and the signal quality of the 5 electrodes; the arithmetic mean of the 5 electrode signal qualities gives Q_n (0 ≤ Q_n ≤ 4).

Combining the signal quality of the headset, the recognition results are screened as follows:

(1) When S_n < 2: the recognition result is reset to neutral, with intensity 0.

(2) When S_n = 2: the average signal strength of the electrodes over a certain period is computed as

$$\bar{Q}=\frac{1}{N}\sum_{i=n-N+1}^{n}Q_i$$

A strength warning value Q_t is set. If $\bar{Q}<Q_t$, the system prompts the user to re-wear the headset and resets the recognition result to neutral with intensity 0; if $\bar{Q}\ge Q_t$, the effective recognition result is computed as

$$T(x_t,y_t)=\frac{1}{4N}\sum_{i=n-N+1}^{n}Q_i\cdot P_i\cdot T_i,\qquad P=\lVert T(x_t,y_t)\rVert=\sqrt{x_t^2+y_t^2}$$

A recognition range angle 2θ is selected, and the direction angle θ_t of T(x_t, y_t) is computed:

a) if 0 ≤ θ_t < θ or 2π − θ ≤ θ_t < 2π, the recognition result is rightward, with intensity P = T·x_0;
b) if π/2 − θ ≤ θ_t < π/2 + θ, the recognition result is forward, with intensity P = T·y_0;
c) if π − θ ≤ θ_t < π + θ, the recognition result is leftward, with intensity P = T·x_0;
d) if 3π/2 − θ ≤ θ_t < 3π/2 + θ, the recognition result is backward, with intensity P = T·y_0;

where x_0 and y_0 are the unit vectors of the x-axis and y-axis respectively.

Finally, an intensity threshold S is set, and recognition results with intensity P < S are discarded.

10.
The UAV mind-teleoperation method according to claim 9, wherein the control system controls the multi-rotor UAV in real time; the specific method of real-time control is:

The ground station uses the Paparazzi system to communicate with the UAV. In the actual control flow, the PC terminal converts the EEG recognition result into a standard data message, writes it to the data-transmission radio, and finally sends it to the UAV via the PPRZLINK protocol. After the UAV flight controller receives the PPRZLINK message, it parses the data field within it and finally converts the recognition result into a control command.
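The quality-gated fusion rule of claim 9 can be sketched in pure Python. This is a simplified illustration under stated assumptions, not the patent's implementation: the function and parameter names (`fuse_window`, `theta`, `Q_t`, `S_min`) are my own, the forward/backward angular sectors are reconstructed by symmetry with the left/right ones, and the intensity is returned as a positive magnitude rather than a signed projection onto x_0/y_0.

```python
import numpy as np

def fuse_window(T_vecs, P, Q, S, theta=np.pi / 4, Q_t=2.0, S_min=0.1):
    """Quality-weighted fusion of the last N recognition results.

    T_vecs: list of N direction vectors T_i(x, y); P: list of N action
    intensities P_i; Q: list of N mean electrode qualities Q_i (0..4);
    S: wireless signal quality (0, 1 or 2). Returns (label, intensity).
    """
    if S < 2:                       # poor wireless link: force neutral
        return "neutral", 0.0
    if np.mean(Q) < Q_t:            # electrodes too noisy: re-seat headset
        return "neutral", 0.0
    N = len(T_vecs)
    # T(x_t, y_t) = (1 / 4N) * sum Q_i * P_i * T_i  (4 = max electrode quality)
    v = sum(q * p * np.asarray(t, dtype=float)
            for q, p, t in zip(Q, P, T_vecs)) / (4.0 * N)
    if np.hypot(v[0], v[1]) < S_min:
        return "neutral", 0.0
    ang = np.arctan2(v[1], v[0]) % (2 * np.pi)      # direction angle theta_t
    if ang < theta or ang >= 2 * np.pi - theta:
        return "right", v[0]
    if abs(ang - np.pi / 2) < theta:
        return "forward", v[1]
    if abs(ang - np.pi) < theta:
        return "left", -v[0]
    if abs(ang - 3 * np.pi / 2) < theta:
        return "backward", -v[1]
    return "neutral", 0.0           # between sectors: discard
```

For example, three consistent "forward" results (T_i = (0, 1)) at full electrode quality Q_i = 4 and intensity P_i = 1 fuse to ("forward", 1.0), while the same window with wireless quality S = 1 is forced to neutral.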
CN201610824357.0A | priority 2016-09-14 | filed 2016-09-14 | Multi-rotor UAV teleoperating system and operation method based on Bluetooth EEG headset | Active | granted as CN106292705B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610824357.0A | 2016-09-14 | 2016-09-14 | Multi-rotor UAV teleoperating system and operation method based on Bluetooth EEG headset


Publications (2)

Publication Number | Publication Date
CN106292705A (en) | 2017-01-04
CN106292705B (en) | 2019-05-31

Family

ID=57712316

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201610824357.0A | Active | CN106292705B (en) | 2016-09-14 | 2016-09-14 | Multi-rotor UAV teleoperating system and operation method based on Bluetooth EEG headset

Country Status (1)

Country | Link
CN (1) | CN106292705B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106774428A (en)* | 2017-03-17 | 2017-05-31 | 厦门中联智创科技有限公司 | A kind of brain wave unmanned aerial vehicle (UAV) control method
CN106927029A (en)* | 2017-03-03 | 2017-07-07 | 东华大学 | A kind of brain control four-axle aircraft induced based on single channel brain wave
CN106933247A (en)* | 2017-03-30 | 2017-07-07 | 歌尔科技有限公司 | The control method of unmanned plane, apparatus and system
CN106940593A (en)* | 2017-02-20 | 2017-07-11 | 上海大学 | Emotiv brain control UASs and method based on VC++ and Matlab hybrid programmings
CN107049308A (en)* | 2017-06-05 | 2017-08-18 | 湖北民族学院 | A kind of idea control system based on deep neural network
CN107065909A (en)* | 2017-04-18 | 2017-08-18 | 南京邮电大学 | A kind of flight control system based on BCI
CN107168313A (en)* | 2017-05-17 | 2017-09-15 | 北京汽车集团有限公司 | Control the method and device of vehicle drive
CN108200511A (en)* | 2018-03-17 | 2018-06-22 | 北京工业大学 | A kind of intelligence meditation speaker based on EEG signals
CN108415568A (en)* | 2018-02-28 | 2018-08-17 | 天津大学 | The intelligent robot idea control method of complex network is migrated based on mode
CN108762303A (en)* | 2018-06-07 | 2018-11-06 | 重庆邮电大学 | A kind of portable brain control UAV system and control method based on Mental imagery
CN109491510A (en)* | 2018-12-17 | 2019-03-19 | 深圳市道通智能航空技术有限公司 | A kind of unmanned aerial vehicle (UAV) control method, apparatus, equipment and storage medium
CN110502103A (en)* | 2019-05-29 | 2019-11-26 | 中国人民解放军军事科学院军事医学研究院 | Brain-controlled UAV system and its control method based on brain-computer interface
CN110658851A (en)* | 2019-08-27 | 2020-01-07 | 北京航空航天大学 | Unmanned aerial vehicle flight path planning system based on electroencephalogram signals
CN110716578A (en)* | 2019-11-19 | 2020-01-21 | 华南理工大学 | Aircraft control system based on hybrid brain-computer interface and control method thereof
CN110900627A (en)* | 2019-11-29 | 2020-03-24 | 哈尔滨工程大学 | Shooting robot device based on brain control technology and remote control technology
CN112327915A (en)* | 2020-11-10 | 2021-02-05 | 大连海事大学 | Idea control method of unmanned aerial vehicle
CN112883914A (en)* | 2021-03-19 | 2021-06-01 | 西安科技大学 | Mining robot idea perception and decision method combining multiple classifiers
CN114114274A (en)* | 2021-11-02 | 2022-03-01 | 北京理工大学 | Unmanned aerial vehicle identification method based on brain-like auditory model
CN114504319A (en)* | 2022-01-30 | 2022-05-17 | 天津大学 | Attention monitoring system based on brain control unmanned aerial vehicle height feedback
CN114586044A (en)* | 2019-11-08 | 2022-06-03 | 索尼集团公司 | Information processing apparatus, information processing method, and information processing program
CN114740886A (en)* | 2022-03-29 | 2022-07-12 | 中国电子科技集团公司第五十四研究所 | Brain-like autonomous operation method for patrol of unmanned aerial vehicle
WO2023020380A1 (en)* | 2021-08-18 | 2023-02-23 | 京东方科技集团股份有限公司 | Processing method and apparatus, control method and apparatus, VR glasses, device, and medium
CN118035638A (en)* | 2022-11-03 | 2024-05-14 | 杨剑 | Unmanned aerial vehicle detection method, device and equipment based on data enhancement machine learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2001080584A (en)* | 1999-09-12 | 2001-03-27 | Yoshitaka Hirano | Automatic operating device for airplane to be started by brain waves
CN102274032A (en)* | 2011-05-10 | 2011-12-14 | 北京师范大学 | Driver fatigue detection system based on electroencephalographic (EEG) signals
CN102715902A (en)* | 2012-06-15 | 2012-10-10 | 天津大学 | Emotion monitoring method for special people
CN105249961A (en)* | 2015-11-02 | 2016-01-20 | 东南大学 | Real-time driving fatigue detection system and detection method based on Bluetooth electroencephalogram headset
CN205340145U (en)* | 2016-01-19 | 2016-06-29 | 郑州轻工业学院 | Telecontrolled aircraft based on brain wave and muscle electric control


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Anonymous: "无人机还可以这么玩——用意念控制" (Drones can also be flown this way: controlled by thought), 《传感器世界》 (Sensor World)*

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106940593B (en)* | 2017-02-20 | 2019-10-11 | 上海大学 | Emotiv brain-controlled UAV system and method based on mixed programming of VC++ and Matlab
CN106940593A (en)* | 2017-02-20 | 2017-07-11 | 上海大学 | Emotiv brain control UASs and method based on VC++ and Matlab hybrid programmings
CN106927029A (en)* | 2017-03-03 | 2017-07-07 | 东华大学 | A kind of brain control four-axle aircraft induced based on single channel brain wave
CN106774428A (en)* | 2017-03-17 | 2017-05-31 | 厦门中联智创科技有限公司 | A kind of brain wave unmanned aerial vehicle (UAV) control method
CN106933247A (en)* | 2017-03-30 | 2017-07-07 | 歌尔科技有限公司 | The control method of unmanned plane, apparatus and system
CN107065909A (en)* | 2017-04-18 | 2017-08-18 | 南京邮电大学 | A kind of flight control system based on BCI
CN107168313A (en)* | 2017-05-17 | 2017-09-15 | 北京汽车集团有限公司 | Control the method and device of vehicle drive
CN107049308A (en)* | 2017-06-05 | 2017-08-18 | 湖北民族学院 | A kind of idea control system based on deep neural network
CN108415568A (en)* | 2018-02-28 | 2018-08-17 | 天津大学 | The intelligent robot idea control method of complex network is migrated based on mode
CN108415568B (en)* | 2018-02-28 | 2020-12-29 | 天津大学 | Robotic intelligent mind control method based on modal transfer complex network
CN108200511A (en)* | 2018-03-17 | 2018-06-22 | 北京工业大学 | A kind of intelligence meditation speaker based on EEG signals
CN108762303A (en)* | 2018-06-07 | 2018-11-06 | 重庆邮电大学 | A kind of portable brain control UAV system and control method based on Mental imagery
CN109491510A (en)* | 2018-12-17 | 2019-03-19 | 深圳市道通智能航空技术有限公司 | A kind of unmanned aerial vehicle (UAV) control method, apparatus, equipment and storage medium
CN110502103A (en)* | 2019-05-29 | 2019-11-26 | 中国人民解放军军事科学院军事医学研究院 | Brain-controlled UAV system and its control method based on brain-computer interface
CN110502103B (en)* | 2019-05-29 | 2020-05-12 | 中国人民解放军军事科学院军事医学研究院 | Brain-controlled UAV system based on brain-computer interface and its control method
CN110658851A (en)* | 2019-08-27 | 2020-01-07 | 北京航空航天大学 | Unmanned aerial vehicle flight path planning system based on electroencephalogram signals
CN114586044A (en)* | 2019-11-08 | 2022-06-03 | 索尼集团公司 | Information processing apparatus, information processing method, and information processing program
CN110716578A (en)* | 2019-11-19 | 2020-01-21 | 华南理工大学 | Aircraft control system based on hybrid brain-computer interface and control method thereof
CN110900627A (en)* | 2019-11-29 | 2020-03-24 | 哈尔滨工程大学 | Shooting robot device based on brain control technology and remote control technology
CN110900627B (en)* | 2019-11-29 | 2023-03-21 | 哈尔滨工程大学 | Shooting robot device based on brain control technology and remote control technology
CN112327915A (en)* | 2020-11-10 | 2021-02-05 | 大连海事大学 | Idea control method of unmanned aerial vehicle
CN112883914A (en)* | 2021-03-19 | 2021-06-01 | 西安科技大学 | Mining robot idea perception and decision method combining multiple classifiers
CN112883914B (en)* | 2021-03-19 | 2024-03-19 | 西安科技大学 | Multi-classifier combined mining robot idea sensing and decision making method
WO2023020380A1 (en)* | 2021-08-18 | 2023-02-23 | 京东方科技集团股份有限公司 | Processing method and apparatus, control method and apparatus, VR glasses, device, and medium
CN114114274A (en)* | 2021-11-02 | 2022-03-01 | 北京理工大学 | Unmanned aerial vehicle identification method based on brain-like auditory model
CN114504319A (en)* | 2022-01-30 | 2022-05-17 | 天津大学 | Attention monitoring system based on brain control unmanned aerial vehicle height feedback
CN114504319B (en)* | 2022-01-30 | 2023-10-31 | 天津大学 | An attention monitoring system based on height feedback of brain-controlled drones
CN114740886A (en)* | 2022-03-29 | 2022-07-12 | 中国电子科技集团公司第五十四研究所 | Brain-like autonomous operation method for patrol of unmanned aerial vehicle
CN118035638A (en)* | 2022-11-03 | 2024-05-14 | 杨剑 | Unmanned aerial vehicle detection method, device and equipment based on data enhancement machine learning

Also Published As

Publication number | Publication date
CN106292705B (en) | 2019-05-31

Similar Documents

Publication | Title
CN106292705B (en) | Multi-rotor UAV teleoperating system and operation method based on Bluetooth EEG headset
Katona et al. | Speed control of Festo Robotino mobile robot using NeuroSky MindWave EEG headset based brain-computer interface
CN112990074A (en) | VR-based multi-scene autonomous control mixed brain-computer interface online system
CN108762303A (en) | A kind of portable brain control UAV system and control method based on Mental imagery
CN106959753A (en) | Unmanned plane dummy control method and system based on Mental imagery brain-computer interface
CN107168346A (en) | A kind of asynchronous system brain control UAS based on wearable display
Rosca et al. | Quadcopter control using a BCI
US20170041587A1 (en) | Dynamically adjustable situational awareness interface for control of unmanned vehicles
Zhang et al. | A simple platform of brain-controlled mobile robot and its implementation by SSVEP
Marin et al. | Drone control based on mental commands and facial expressions
CN108196566A (en) | A kind of small drone cloud brain control system and its method
CN106406297A (en) | Wireless electroencephalogram-based control system for controlling crawler type mobile robot
Zhou et al. | Development and evaluation of BCI for operating VR flight simulator based on desktop VR equipment
CN118486455B (en) | Multi-mode physiological data evaluation system based on virtual reality technology
CN114003129B (en) | Idea control virtual-real fusion feedback method based on non-invasive brain-computer interface
CN106569508A (en) | Unmanned aerial vehicle control method and device
Shi et al. | Indoor space target searching based on EEG and EOG for UAV
CN113009931A (en) | Man-machine and unmanned-machine mixed formation cooperative control device and method
Parikh et al. | Quadcopter control in three-dimensional space using SSVEP and motor imagery-based brain-computer interface
CN117148773A (en) | Remote unmanned aerial vehicle control method and equipment
CN112364977A (en) | Unmanned aerial vehicle control method based on motor imagery signals of brain-computer interface
Janapati et al. | Real-time emotion detection system using emotive and ESP-32
CN113625871B (en) | VR fusion online brain-controlled flight simulation driving system
CN114460958A (en) | A Brain-Computer Fusion Flight Control System Based on Hierarchical Architecture
Duval et al. | The eyes and hearts of UAV pilots: observations of physiological responses in real-life scenarios

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
