CN113343863A - Fusion characterization network model training method, fingerprint characterization method and equipment thereof - Google Patents


Info

Publication number
CN113343863A
Authority
CN
China
Prior art keywords
feature
fusion
fingerprint
training
channel state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110655987.0A
Other languages
Chinese (zh)
Other versions
CN113343863B (en)
Inventor
刘雯
邓中亮
陈宏�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202110655987.0A
Publication of CN113343863A
Application granted
Publication of CN113343863B
Legal status: Active (current)
Anticipated expiration


Abstract

The present invention provides a fusion representation network model training method, a fingerprint representation method, and devices thereof. The training method includes: performing feature extraction on channel state information data with a multilayer perceptron network to obtain a channel state information feature map; performing feature extraction on the image data of each orientation with convolutional neural networks sharing the same weights to obtain the feature map of the image of each orientation; fusing the feature maps of the images of all orientations of the same training sample to obtain a multi-orientation feature map; concatenating the channel state information feature map and the multi-orientation feature map to construct a fusion representation; constructing a fusion fingerprint library from the fusion representations of channel state information and images, and optimizing the parameters with a set measurement index to obtain the fusion representation network model, where the set measurement index measures the distance between feature fingerprints in the fusion fingerprint library. This scheme improves feature discriminability and positioning accuracy.

Figure 202110655987


Description

Fusion representation network model training method, fingerprint representation method and device thereof

Technical Field

The present invention relates to the field of positioning technology, and in particular to a fusion representation network model training method, a fingerprint representation method, and devices thereof.

Background

With the growing informatization and intelligence of society, navigation, positioning, and related information occupy an ever larger share of daily life, and the location-based services industry has found wide application across many fields.

For complex indoor environments, a variety of positioning methods have been proposed. Classified by signal source, they fall into positioning technologies based on radio signals such as Wireless Fidelity (Wi-Fi), Bluetooth, and ultra-wideband, and positioning technologies based on non-radio signals such as infrared, ultrasound, vision, and inertial systems.

Among the many positioning sources, Wi-Fi and vision have attracted particular attention for their rich positioning information and low hardware cost. Wi-Fi features include Received Signal Strength (RSS) and Channel State Information (CSI); CSI provides more detailed subcarrier information and is more stable over time, so it can achieve better positioning performance. Published reports of CSI-based positioning have reached meter-level accuracy, but in complex indoor environments the features still lack discriminability, making it difficult to determine a unique position.

Fingerprint positioning algorithms associate different positions in an indoor environment with some "fingerprint" feature, thereby treating the positioning problem as a pattern recognition problem of fingerprint matching. Any location-unique feature can serve as a location fingerprint; common ones include multipath structure, received signal strength, channel state information, and image features. Fingerprint positioning usually comprises two stages: offline database construction and online matching. The offline stage establishes the relationship between each location and its fingerprint: reference points for signal acquisition are first chosen in the scene to be located, fingerprint data are collected at each point with the relevant equipment, and the data are recorded together with the reference point coordinates. The online stage compares the fingerprint features measured by the device to be located against the given database to estimate the device's position. Common matching algorithms include K-nearest-neighbor matching, weighted K-nearest-neighbor matching, and neural networks.

Fingerprint-based positioning algorithms achieve relatively high accuracy with modest hardware requirements; data collection can be completed without additional equipment. Positioning based on visual images is widely used in navigation and localization because it requires no pre-deployed equipment and signal acquisition is cheap. Image-matching indoor positioning offers stable features and low sensitivity to noise, but the feature dimensionality is generally high, the matching process is complex and resource-intensive, and real-time, low-cost operation is hard to achieve.

The fingerprint features of a single positioning source are easily disturbed by the environment and therefore unstable. Moreover, owing to the limits of its information, a single source usually suffers from low feature accuracy and insufficient completeness. In contrast, fused fingerprint features from multi-sensor data can provide location information from different perspectives, further improving fingerprint accuracy and stability. Data fusion across positioning sensors offers better fault tolerance, complementarity, real-time performance, and economy, and multi-source fusion positioning has become the trend for high-accuracy indoor positioning.

Data fusion algorithms are the key to exploiting multi-sensor information. Existing multi-source fusion positioning methods fall mainly into decision-level fusion and feature-level fusion: decision-level fusion makes high-level decisions from the individual judgments of multiple sensors according to certain rules; feature-level fusion first extracts feature information from each sensor's raw data and then fuses it into richer, more stable features for the final decision.

Compared with decision-level fusion, a positioning pipeline based on feature-level fusion is more concise, further improving the system's real-time performance and availability. However, most heterogeneous feature fusion algorithms in existing fusion positioning systems still stop at feature combination or selection, and thus cannot fundamentally remedy the insufficient discriminability of fingerprint features.

Summary of the Invention

In view of this, the present invention provides a fusion representation network model training method, a fingerprint representation method, and devices thereof, so as to overcome one or more defects of the prior art.

To achieve the above object, the present invention is realized by the following scheme:

According to one aspect of the embodiments of the present invention, a fusion representation network model training method is provided, including:

acquiring a training sample set, each training sample including channel state information data and image data of multiple orientations corresponding to the same environmental location;

performing feature extraction on the channel state information data in each training sample with a multilayer perceptron network, to obtain the channel state information feature map corresponding to that training sample;

performing feature extraction on the image data of each orientation in each training sample with convolutional neural networks sharing the same weights, to obtain the feature map of the image of each orientation corresponding to that training sample;

fusing the feature maps of the images of all orientations corresponding to the same training sample, to obtain the multi-orientation feature map corresponding to that training sample;

concatenating, in a feature fusion layer, the channel state information feature map and the multi-orientation feature map corresponding to the same training sample, to construct the fusion representation of the channel state information and the images corresponding to that training sample;

constructing a fusion fingerprint library from the feature fingerprints corresponding to the fusion representations of the channel state information and images of the training samples, and optimizing the parameters of the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer based on the fusion fingerprint library and a set measurement index, such that feature fingerprints of the same environmental location are close together and feature fingerprints of different environmental locations are far apart, thereby obtaining the trained network model as the fusion representation network model; the set measurement index measures the distance between feature fingerprints in the fusion fingerprint library.

In some embodiments, the channel state information data in the training samples is channel state amplitude data.

In some embodiments, fusing the feature maps of the images of all orientations corresponding to the same training sample to obtain the multi-orientation feature map includes:

superimposing and fusing the feature maps of the images of all orientations corresponding to the same training sample, to obtain the multi-orientation feature map corresponding to that training sample.

In some embodiments, performing feature extraction on the image data of each orientation with convolutional neural networks sharing the same weights to obtain the per-orientation feature maps includes:

performing one-dimensional key feature extraction on the image data of each orientation in the training sample with convolutional neural networks sharing the same weights, to obtain the feature map of the image of each orientation corresponding to that training sample.

In some embodiments, the convolutional neural network includes a convolutional layer, a pooling layer, and a flatten layer;

performing one-dimensional key feature extraction on the image data of each orientation with the weight-sharing convolutional neural network to obtain the per-orientation feature maps then includes:

performing a convolution operation on the image data of each orientation in the training sample with the convolutional layer, to obtain a multi-dimensional feature map;

performing a max pooling operation on the multi-dimensional feature map with the pooling layer, to obtain a simplified, dimension-reduced feature map;

applying, in turn, a ReLU activation function to perform a nonlinear transformation on the simplified, dimension-reduced feature map, a Dropout strategy to randomly discard some neuron nodes, and the flatten layer to unroll the result, obtaining the feature map of the image of each orientation corresponding to that training sample.

In some embodiments, constructing the fusion fingerprint library from the feature fingerprints corresponding to the fusion representations of channel state information and images includes:

dividing the feature fingerprints corresponding to the fusion representations of the channel state information and images of the training samples into triplets to form the fusion fingerprint library, where each triplet contains a feature fingerprint anchor sample, a feature fingerprint positive sample from the same location as the anchor sample, and a feature fingerprint negative sample from a different location than the anchor sample;

optimizing the parameters of the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer based on the fusion fingerprint library and the set measurement index then includes:

taking the negative of the set measurement index as the objective function to be minimized, and optimizing the parameters of the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer with the Adam algorithm based on the fusion fingerprint library.

In some embodiments, the objective function is expressed as:

L = -D = \frac{1}{N} \sum_{i=1}^{N} \left( \left\| v_i^a - v_i^p \right\|_2^2 - \left\| v_i^a - v_i^n \right\|_2^2 + \alpha \right)

where L is the value of the objective function, D is the set measurement index, N is the total number of triplets, v_i^a, v_i^p, and v_i^n denote the feature fingerprint anchor sample, feature fingerprint positive sample, and feature fingerprint negative sample, respectively, and α is a tunable parameter.

According to another aspect of the embodiments of the present invention, a fusion feature fingerprint representation method is further provided, including:

acquiring channel state information data and image data of multiple orientations at a set environmental location;

processing the channel state information data and the image data of the multiple orientations with a fusion representation network model trained by the method of any of the above embodiments, to obtain the feature fingerprint at the set environmental location.

According to another aspect of the embodiments of the present invention, an electronic device is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of any of the above embodiments when executing the program.

According to another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided, on which a computer program is stored, the program implementing the steps of the method of any of the above embodiments when executed by a processor.

With the fusion representation network model training method, fusion feature fingerprint representation method, electronic device, and computer-readable storage medium of the embodiments of the present invention, the fusion representation network of channel state information and multi-orientation images deeply fuses two heterogeneous kinds of positioning data. Because the two kinds of data are complementary to a degree, the fusion enriches the positioning information and helps improve positioning accuracy. Moreover, extracting features from images of different orientations with convolutional networks sharing the same weights gives the per-orientation extraction results a certain orientation correlation; fusing the features of different orientations removes the influence of orientation differences and makes the information richer. In addition, optimizing the network model parameters with a measurement index improves feature discriminability and makes positioning results more accurate.

Brief Description of the Drawings

To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort. In the drawings:

Fig. 1 is a schematic flowchart of a fusion representation network model training method according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the structure of a fusion representation network model of CSI and images in a specific embodiment of the present invention;

Fig. 3 is a schematic diagram of the parameter optimization framework of the fusion representation model in an embodiment of the present invention.

Detailed Description

To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the drawings. The exemplary embodiments of the present invention and their descriptions here serve to explain the present invention, not to limit it.

Addressing the low feature accuracy and insufficient completeness of a single positioning source in complex indoor environments, and the difficulty current fusion algorithms have in changing feature discriminability, the inventors observed that CSI (channel state information) and image positioning are complementary to a degree, so fused data can enrich the positioning information. Multi-source fusion positioning can effectively alleviate the low feature accuracy and insufficient completeness of a single positioning source in complex environments.

Moreover, most heterogeneous feature fusion algorithms in existing fusion positioning systems still stop at feature combination or selection; a unified fusion representation method for heterogeneous features is lacking, so the insufficient discriminability of fingerprint features is hard to remedy fundamentally. Introducing a measurement index into the fusion representation can further improve fingerprint discriminability.

On this basis, for the problem of highly discriminative fusion representation of the two heterogeneous data types of channel state information and multi-orientation images, the present invention proposes a fusion representation network model training method that deeply fuses CSI and image positioning data at the feature level and trains the network parameters with the measurement index as the optimization target, yielding a highly discriminative fusion representation model.

Fig. 1 is a schematic flowchart of a fusion representation network model training method according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment may include the following steps S110 to S160.

The specific implementation of steps S110 to S160 is described in detail below.

Step S110: acquire a training sample set, each training sample including channel state information data and image data of multiple orientations corresponding to the same environmental location.

In step S110, the environmental location may be, for example, a location point in an indoor environment. Capturing images of the same point from different orientations yields image data of multiple orientations. The number of orientations may be determined as needed, e.g. 2, 3, 4, or 5. The channel state information data may be, for example, data of Wi-Fi signals.

The channel state information data in the training samples is channel state amplitude data, specifically raw channel state amplitude data (CSI data). In other embodiments, the channel state information data may be channel state phase data, typically obtained by correcting raw channel state phase data.

Step S120: perform feature extraction on the channel state information data in the training samples with a multilayer perceptron network, to obtain the channel state information feature map corresponding to each training sample.

In step S120, the multilayer perceptron network may have, for example, three layers. The channel state information data may be in vector form, and a multilayer perceptron network extracts features from vector data well.

Step S130: perform feature extraction on the image data of each orientation in the training samples with convolutional neural networks sharing the same weights, to obtain the feature map of the image of each orientation corresponding to each training sample.

In step S130, "convolutional neural networks sharing the same weights" means that when the convolutional neural network (CNN) extracts features from the image of each orientation, the weights of the network used are identical, so that the feature maps of images of different orientations are positionally correlated. The activation function may be the ReLU activation function, which performs the nonlinear transformation.

In a specific implementation, step S130 (performing feature extraction on the image data of each orientation with weight-sharing convolutional neural networks to obtain the per-orientation feature maps) may specifically include step S131: performing one-dimensional key feature extraction on the image data of each orientation in the training sample with the weight-sharing convolutional neural network, to obtain the feature map of the image of each orientation corresponding to that training sample.

In this embodiment, extracting one-dimensional key features eliminates orientation differences in the feature extraction stage.

In a further embodiment of step S130, the convolutional neural network may include a convolutional layer, a pooling layer, and a flatten layer. Step S131 may then include: S1311, performing a convolution operation on the image data of each orientation in the training sample with the convolutional layer, to obtain a multi-dimensional feature map; S1312, performing a max pooling operation on the multi-dimensional feature map with the pooling layer, to obtain a simplified, dimension-reduced feature map; S1313, applying in turn a ReLU activation function for a nonlinear transformation of the simplified, dimension-reduced feature map, a Dropout strategy to randomly discard some neuron nodes, and the flatten layer to unroll the result, obtaining the feature map of the image of each orientation corresponding to that training sample.

The activation function mainly performs a nonlinear transformation during dimensionality reduction, increasing the expressiveness of the fitted transformation. Simplification and dimensionality reduction comprise linear and nonlinear transformations, which together convert the data domain; the result of the transformation is called a feature map. The Dropout strategy operates on the intermediate (hidden) nodes of the neural network; random discarding during training increases sample diversity and improves model robustness. The flatten layer unrolls the multi-dimensional feature map into a one-dimensional vector in a fixed order before connecting to the next layer.

In this embodiment, the simplified, dimension-reduced feature map enlarges the receptive field while preventing overfitting; the Dropout strategy gives the computation more random structure; and flattening yields one-dimensional features.

Step S140: fuse the feature maps of the images of all orientations corresponding to the same training sample, to obtain the multi-orientation feature map corresponding to that training sample.

In step S140, fusing the features of all orientations yields a new, more informative feature vector.

In a specific implementation, the fusion strategy may be superposition (addition) or concatenation.

For example, step S140 may specifically include step S141: superimposing and fusing the feature maps of the images of all orientations corresponding to the same training sample, to obtain the multi-orientation feature map corresponding to that training sample.

In this example, the superposition strategy adds the features extracted from images of different orientations element by element, achieving an order-free fusion. Superposition accumulates information, increasing the information per dimension while keeping the feature dimensionality unchanged. It thus completes the unordered fusion of multi-orientation features while reducing computation.

In other embodiments, fusion may be performed by concatenation, specifically concatenation in orientation order.

Step S150: concatenate, in the feature fusion layer, the channel state information feature map and the multi-orientation feature map corresponding to the same training sample, to construct the fusion representation of the channel state information and images corresponding to that training sample.

In step S150, the feature fusion layer may include a perceptron layer, for example a basic perceptron layer.

Step S160: construct a fusion fingerprint library from the feature fingerprints corresponding to the fusion representations of the channel state information and images of the training samples; based on the fusion fingerprint library and a set measurement index, optimize the parameters of the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer, such that feature fingerprints of the same environmental location are close together and feature fingerprints of different environmental locations are far apart, thereby obtaining the trained network model as the fusion representation network model; the set measurement index measures the distance between feature fingerprints in the fusion fingerprint library.

In a specific implementation of step S160, constructing the fusion fingerprint library may include step S161: dividing the feature fingerprints corresponding to the fusion representations of the channel state information and images of the training samples into triplets to form the fusion fingerprint library, where each triplet contains a feature fingerprint anchor sample, a feature fingerprint positive sample from the same location as the anchor sample, and a feature fingerprint negative sample from a different location than the anchor sample.
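As an illustration only, the triplet division of step S161 might be implemented as in the following Python sketch; the function name, data layout, and random sampling scheme are assumptions, since the embodiment only specifies that each triplet pairs an anchor with a same-location positive and a different-location negative.

```python
import random

def build_triplets(fingerprints, locations, num_triplets):
    """Divide feature fingerprints into (anchor, positive, negative) triplets.

    fingerprints: list of feature vectors (one per sample)
    locations:    list of location labels, aligned with fingerprints
    Assumes at least two locations, with repeated samples per location.
    """
    # Group sample indices by location so positives and negatives are easy to draw.
    by_loc = {}
    for idx, loc in enumerate(locations):
        by_loc.setdefault(loc, []).append(idx)

    triplets = []
    locs = list(by_loc)
    while len(triplets) < num_triplets:
        loc = random.choice(locs)
        if len(by_loc[loc]) < 2:
            continue  # need two samples at a location for an anchor/positive pair
        a, p = random.sample(by_loc[loc], 2)           # anchor, positive: same location
        neg_loc = random.choice([l for l in locs if l != loc])
        n = random.choice(by_loc[neg_loc])             # negative: different location
        triplets.append((fingerprints[a], fingerprints[p], fingerprints[n]))
    return triplets
```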

In step S160, the parameter optimization may include step S162: taking the negative of the set measurement index as the objective function to be minimized, and optimizing the parameters of the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer with the Adam algorithm based on the fusion fingerprint library.

For example, the objective function may be expressed as:

L = -D = \frac{1}{N} \sum_{i=1}^{N} \left( \left\| v_i^a - v_i^p \right\|_2^2 - \left\| v_i^a - v_i^n \right\|_2^2 + \alpha \right)

where L is the value of the objective function, D is the set measurement index, N is the total number of triplets, v_i^a, v_i^p, and v_i^n denote the feature fingerprint anchor sample, feature fingerprint positive sample, and feature fingerprint negative sample, respectively, and α is a tunable parameter.

In addition, an embodiment of the present invention further provides a fusion feature fingerprint representation method, which may include the steps:

S210: acquire channel state information data and image data of multiple orientations at a set environmental location;

S220: process the channel state information data and the image data of the multiple orientations with a fusion representation network model trained by the fusion representation network model training method of any embodiment of the present invention, to obtain the feature fingerprint at the set environmental location.

An environmental location point may correspond to one or more fingerprint features. For a given environment, multiple location points and their corresponding feature fingerprints form a fingerprint library for use in positioning.

In addition, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the fusion representation network model training method of any embodiment or the fusion feature fingerprint representation method of any embodiment when executing the program.

An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, the program implementing the steps of the fusion representation network model training method of any embodiment or the fusion feature fingerprint representation method of any embodiment when executed by a processor.

In the embodiments of the present invention, heterogeneous CSI data and multi-orientation image data are mapped by a neural network into a fusion representation domain, and the parameters of the representation network are optimized by maximizing the discriminability of the fused fingerprints, yielding a final fusion representation network that can be used to build and match highly discriminative CSI-image fusion fingerprints. CSI and image positioning are complementary to a degree, and fused data enriches the positioning information. The deep fusion of the two heterogeneous kinds of positioning data by the fusion representation network of channel state information and multi-orientation images enhances fingerprint discriminability and improves positioning accuracy.

The above method is described below with reference to a specific embodiment; note, however, that this specific embodiment only serves to better illustrate the present application and does not unduly limit it.

Addressing the low feature accuracy and insufficient completeness of a single positioning source in complex indoor environments, and the difficulty current fusion algorithms have in changing feature discriminability, the fusion representation network for channel state information and multi-orientation images of this embodiment deeply fuses the two heterogeneous kinds of positioning data, enhancing fingerprint discriminability and improving positioning accuracy.

The fusion fingerprint representation method of this embodiment is implemented with a fusion representation network model. The model obtains multi-orientation image features with a shared convolutional neural network (CNN) and a superposition strategy, extracts CSI amplitude features with a multilayer perceptron, and reduces the dimensionality of the joined features to obtain the final fusion representation. The parameters of the whole network are optimized mainly by maximizing the measurement index of the fusion representation fingerprint library.

Fig. 2 is a schematic diagram of the structure of the fusion representation network model of CSI and images in a specific embodiment of the present invention. Referring to Fig. 2, the two inputs of the network model are raw CSI amplitude data and raw multi-orientation image data, i.e., indoor scene images captured by cameras from different orientations at a given location point, without other preprocessing (four orientations are taken as an example). The data structure of a single raw CSI amplitude record in a packet can be expressed as:

CSI = [CSI_1, \ldots, CSI_i, \ldots, CSI_K] \quad (1)

where K is the total number of subcarriers in the channel and CSI_i is the amplitude of subcarrier i.

The data structure of the multi-orientation images can be expressed as:

Image = [Image_1, \ldots, Image_i, \ldots, Image_m], \quad Image_i \in \mathbb{R}^{M \times N} \quad (2)

where Image_i is the image of the i-th orientation, and M and N are the numbers of pixel rows and columns of a single image, respectively.
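For concreteness, a minimal sketch of the two input structures in Python, with hypothetical sizes (K = 30 subcarriers, m = 4 orientations, M×N = 90×160 pixels; none of these values are given in the text):

```python
import numpy as np

K, m, M, N = 30, 4, 90, 160          # hypothetical sizes
csi = np.random.rand(K)              # one CSI amplitude record [CSI_1, ..., CSI_K], eq. (1)
images = np.random.rand(m, M, N)     # m orientation images, each M x N pixels, eq. (2)
```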

The CSI amplitude and the multi-orientation images are two heterogeneous kinds of positioning data. To obtain their fusion representation, the network first extracts and processes features from the two kinds of data separately, reducing their dimensionality and homogenizing their structure.

To extract the key features of the CSI amplitude vector, a classical multilayer perceptron processes the amplitude vector. Each layer performs a linear transformation through weights and biases and a nonlinear transformation through an activation function; more layers increase the network's capacity to fit complex nonlinearities, and three layers generally suffice for most requirements. The recursion from the neurons of layer l-1 to the neurons of layer l of the perceptron can be expressed as:

z_i^{(l)} = \sum_{j} w_{ij}^{(l)} a_j^{(l-1)} + b_i^{(l)} \quad (3)

a_i^{(l)} = \Phi\left( z_i^{(l)} \right) \quad (4)

where a_i^{(l)} is the value of the i-th neuron in layer l, a_j^{(l-1)} is the value of the j-th neuron in layer l-1, w_{ij}^{(l)} is the connection weight between the j-th neuron in layer l-1 and the i-th neuron in layer l, and b_i^{(l)} is the bias of the i-th neuron in layer l. The activation function Φ is the ReLU function:

ReLU(x) = max(0, x) \quad (5)

where x is the value of the neuron.

The final output feature of the perceptron is:

y_{CSI} = [y_1, y_2, \ldots, y_i, \ldots], \quad y_i = a_i^{(L)} \quad (6)

where L is the total number of layers of the multilayer perceptron and y_i is the value of the i-th dimension node of the output CSI feature y_CSI.
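A minimal PyTorch sketch of a three-layer perceptron in the spirit of equations (3) to (6); the layer widths are assumptions, and only the structure (a linear transform plus ReLU per layer, ending in the feature y_CSI) follows the description:

```python
import torch.nn as nn

class CSIPerceptron(nn.Module):
    """Three-layer MLP for CSI amplitude feature extraction, eqs. (3)-(6)."""
    def __init__(self, num_subcarriers=30, feat_dim=64):  # hypothetical widths
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_subcarriers, 128), nn.ReLU(),   # eqs. (3)+(4), layer 1
            nn.Linear(128, 128), nn.ReLU(),               # layer 2
            nn.Linear(128, feat_dim), nn.ReLU(),          # layer 3 -> y_CSI, eq. (6)
        )

    def forward(self, csi):
        return self.net(csi)
```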

To extract the key features of the high-dimensional multi-orientation image vectors, a shared CNN model is first designed to perform one-dimensional key feature extraction on the image of each orientation, extracting the position-related key features of each orientation image while eliminating orientation differences in the feature extraction stage; the key features of all orientations are then superimposed and fused into a new, more informative feature vector, completing the unordered fusion of multi-orientation features while reducing computation.

In the shared CNN model, the network comprises convolution, pooling, Dropout, and flatten layers. Convolution extracts rotation-invariant and translation-invariant features from the input image Image_n of orientation n; the two-dimensional convolution formula is:

s(i, j) = \sum_{k=1}^{n\_in} \left( X_k * W_k \right)(i, j) + b \quad (7)

where X and W denote the overall input and weights, n_in is the total number of channels of the input data, X_k is the input matrix of the k-th channel, W_k is the sub-convolution-kernel matrix for the k-th channel, s(i, j) is the value of the element at position (i, j) of the output matrix for kernel W (i and j index rows and columns), and b is the bias.
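A direct NumPy rendering of equation (7) for one output channel, written as explicit loops for clarity rather than speed; the sizes are arbitrary:

```python
import numpy as np

def conv2d_single(X, W, b):
    """s(i, j) = sum_k (X_k * W_k)(i, j) + b for one output channel, eq. (7).

    X: (n_in, H, W_img) input, W: (n_in, kh, kw) kernel, b: scalar bias.
    """
    n_in, H, W_img = X.shape
    _, kh, kw = W.shape
    out = np.zeros((H - kh + 1, W_img - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sum the per-channel cross-correlations, then add the shared bias.
            out[i, j] = sum(
                (X[k, i:i + kh, j:j + kw] * W[k]).sum() for k in range(n_in)
            ) + b
    return out
```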

The convolution operation yields a multi-dimensional feature map. Max pooling simplifies the feature map and reduces its dimensionality, enlarging the receptive field while preventing overfitting; the ReLU activation function performs the nonlinear transformation. Next, to improve robustness, the Dropout strategy randomly discards some neuron nodes to add randomness, so that the computation contains more random structure. Finally, to obtain the one-dimensional key features of the image, the resulting feature map is flattened into the initial feature h_n of the orientation-n image:

hn=Shared_model(Imagen) (8)hn =Shared_model(Imagen ) (8)

where Shared_model() denotes the CNN model.

Feature extraction based on the shared CNN model is essentially a position-correlated data compression and unified feature extraction process applied, with identical weights, to the multi-orientation image data of the same location point. The weight sharing means that effective image features can still be extracted to represent the location even when orientation information is missing; it eliminates orientation differences in the feature extraction stage and strengthens the positional correlation of the image features across orientations.
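A PyTorch sketch of one possible Shared_model built from the layer types named in the text (convolution, max pooling, ReLU, Dropout, flatten); the channel count, kernel size, and dropout rate are assumptions, and weight sharing is obtained by applying the same module instance to every orientation:

```python
import torch
import torch.nn as nn

class SharedCNN(nn.Module):
    """Shared per-orientation feature extractor h_n = Shared_model(Image_n), eq. (8)."""
    def __init__(self, dropout=0.5):                    # dropout rate is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3),            # convolution, eq. (7)
            nn.MaxPool2d(2),                            # max pooling: simplify, downsample
            nn.ReLU(),                                  # nonlinear transform, eq. (5)
            nn.Dropout(dropout),                        # randomly drop nodes in training
            nn.Flatten(),                               # unroll to a 1-D key feature vector
        )

    def forward(self, image):
        return self.features(image)

# Applying the SAME module instance to each orientation shares the weights.
shared = SharedCNN()
images = torch.rand(4, 1, 90, 160)                      # 4 orientations, hypothetical sizes
h = [shared(img.unsqueeze(0)) for img in images]        # per-orientation features h_1..h_m
```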

After the shared CNN model extracts the features of each orientation, the network needs a fusion strategy to integrate the information of the multi-channel orientation feature maps. Feature map fusion strategies include Concat concatenation and Add summation. Concatenation is commonly used to join features or fuse output-layer information; it merges channels, so the number of feature dimensions increases while the information per dimension is unchanged. Summation accumulates information, increasing the information per dimension while keeping the feature dimensionality unchanged. For the feature map information of single-orientation images, Concat cascading suffers from higher data dimensionality, more redundant information, and a strong dependence on the concatenation order of orientations; this embodiment therefore adopts the Add superposition strategy, summing the features extracted from images of different orientations element by element to achieve an order-free fusion. The feature map h of the multi-orientation images can be expressed as:

h = \begin{bmatrix} h_1 \\ \vdots \\ h_m \end{bmatrix} = \begin{bmatrix} h_{1,1} & \cdots & h_{1,n} \\ \vdots & \ddots & \vdots \\ h_{m,1} & \cdots & h_{m,n} \end{bmatrix} \quad (9)

where m is the number of orientations, n is the feature dimension of a single-orientation image, h_i is the feature map of orientation i, and h_{i,j} is the element of feature dimension j of orientation i, with i ranging over 1 to m and j over 1 to n.

The fused feature y_IMA based on superimposed information can be expressed as:

y_{IMA} = [r_1, r_2, \ldots, r_i, \ldots, r_n] \quad (10)

where r_i = \sum_{k=1}^{m} h_{k,i} is the element-wise sum of feature dimension i over the m orientations.
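The Add superposition of equations (9) and (10) then reduces to an element-wise sum over orientations; a self-contained sketch with hypothetical dimensions:

```python
import torch

m, n = 4, 256                      # hypothetical: 4 orientations, 256-dim features
h = torch.rand(m, n)               # per-orientation feature maps h_1..h_m, eq. (9)
y_IMA = h.sum(dim=0)               # r_i = sum over orientations of h_{k,i}, eq. (10)
assert y_IMA.shape == (n,)         # dimensionality unchanged; information accumulated
```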

To obtain a fusion representation of the two heterogeneous data features, a basic perceptron layer serves as the feature fusion layer. After the CSI amplitude features and the multi-orientation image features are concatenated, a fully connected layer with an activation function fits the mapping to the fusion representation: weight parameters assigned to each dimension of the two features realize a linear transformation of the feature domain, and the activation function realizes its nonlinear transformation, constructing the final heterogeneous feature fusion representation domain. The calculation is:

v_m^k = \Phi\left( \sum_{i=1}^{N_{CSI}} w_{m,i}\, y_i^{CSI,k} + \sum_{j=1}^{N_{IMA}} w'_{m,j}\, y_j^{IMA,k} + c \right) \quad (11)

where v_m^k is the m-th element of the heterogeneous-feature-space vector at the k-th fingerprint point, M is the dimension of the heterogeneous feature space vector, y_i^{CSI,k} and y_j^{IMA,k} are dimension i of the channel state information feature vector and dimension j of the multi-orientation image feature vector of the k-th fingerprint point, N_CSI and N_IMA are the dimensions of these two feature vectors, and c and the weights w are tunable parameters.

To achieve a highly discriminative fused fingerprint representation, this embodiment optimizes the parameters of the fusion representation network model according to the measurement index of the heterogeneous fusion feature space.

High-quality fused fingerprint features should satisfy the following conditions in the representation domain: feature fingerprints of the same location point are close together, and feature fingerprints of different location points are far apart. Following this principle, this embodiment uses the classical Euclidean distance as the basic distance measure between fusion representations and, on that basis, defines a measurement index for the whole fusion fingerprint library. Fig. 3 is a schematic diagram of the parameter optimization framework of the fusion representation model in an embodiment of the present invention. Referring to Fig. 3, all fused fingerprints are divided into triplets, each containing an anchor sample v_a, a positive sample v_p from the same location as the anchor, and a negative sample v_n from a different location. To reduce the fingerprint distance within a location and enlarge it across locations, the measurement index D of the fusion fingerprint library can be expressed as:

D = \frac{1}{N} \sum_{i=1}^{N} \left( \left\| v_i^a - v_i^n \right\|_2^2 - \left\| v_i^a - v_i^p \right\|_2^2 - \alpha \right) \quad (12)

where v_i^a, v_i^p, and v_i^n are the anchor, positive, and negative location-point fingerprints of the i-th triplet, α is the minimum threshold separating the negative-pair distance from the positive-pair distance, and N is the total number of triplets formed from anchor samples and positive/negative sample pairs.

The measurement index of the heterogeneous feature fusion representation domain quantitatively describes, in theory, the feature discriminability of fingerprints in that domain and further guides the optimization of the fusion representation parameters. The parameters of the fusion representation model comprise the weights and thresholds of the multilayer perceptron, the shared CNN, and the feature fusion layer; each parameter represents the contribution of one dimension of a feature vector to the fusion representation domain, and optimization selects parameters so that highly discriminative feature information contributes relatively more to the fusion representation. The heterogeneous feature space measurement index is proportional to the difference between fingerprints at different locations in the library and inversely proportional to the difference between fingerprints at the same location, so building a highly discriminative fingerprint library should maximize it. Therefore, the negative of the measurement index D is chosen as the objective function L to be minimized:

L = -D = \sum_{i=1}^{N} \left( \left\| v_i^a - v_i^p \right\|_2^2 - \left\| v_i^a - v_i^n \right\|_2^2 + \alpha \right)
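Since L must be differentiable to drive gradient-based training, a PyTorch sketch is natural here. The hinge (clamp at zero) is an assumption borrowed from the standard triplet loss; the text above defines only L = -D:

```python
import torch

def objective_L(v_a, v_p, v_n, alpha=0.2):
    """Squared-Euclidean triplet objective over (B, d) fingerprint batches."""
    d_pos = ((v_a - v_p) ** 2).sum(dim=1)   # same-location distances
    d_neg = ((v_a - v_n) ** 2).sum(dim=1)   # different-location distances
    # hinge form: triplets already separated by the margin contribute zero
    return torch.clamp(d_pos - d_neg + alpha, min=0).sum()
```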

The Adam algorithm is used for optimization, with the following update rule:

w_t = w_{t-1} - \alpha \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}

where t denotes the update step, w_t denotes the weight parameters after the t-th update, w_{t-1} denotes the weight parameters after the (t-1)-th update, α and ε are configurable parameters, and \hat{m}_t and \hat{v}_t are the bias-corrected versions of m_t and v_t, computed as:

\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}

where β1 and β2 are constants controlling the exponential decay rates, m_t is the exponential moving average of the gradient after the t-th update (obtained from the first moment of the gradient), and v_t is the exponentially averaged squared gradient after the t-th update (obtained from the second moment of the gradient). m_t and v_t are updated as follows:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t \qquad (16)

v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \qquad (17)

where g_t is the gradient (the first derivative of the objective with respect to the weights) at the t-th update.

The parameters in the above formulas can be set to the defaults α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10^{-8}.
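A from-scratch sketch of this update rule, assuming a flat NumPy parameter vector; it implements equations (16) and (17), the bias corrections, and the weight update with the default hyperparameters above:

```python
import numpy as np

class Adam:
    """Minimal NumPy sketch of the Adam update described in the text."""

    def __init__(self, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.alpha, self.beta1, self.beta2, self.eps = alpha, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, w, g):
        """Return the updated weights given current weights w and gradient g."""
        if self.m is None:                                     # lazy state init
            self.m, self.v = np.zeros_like(w), np.zeros_like(w)
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * g        # eq. (16)
        self.v = self.beta2 * self.v + (1 - self.beta2) * g ** 2   # eq. (17)
        m_hat = self.m / (1 - self.beta1 ** self.t)                # bias correction
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return w - self.alpha * m_hat / (np.sqrt(v_hat) + self.eps)
```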

In the CSI-image fusion representation network of this embodiment, a shared CNN with a superposition strategy extracts the feature vectors of the multi-directional images, a multilayer perceptron extracts the CSI amplitude features, and the two features are concatenated and transformed into a fused representation. The network parameters are trained with the measurement index of the fused fingerprints as the optimization target, yielding the final highly discriminative fusion representation network model. Through this fusion representation network of channel state information and multi-directional images, fused fingerprint representation of heterogeneous positioning data is realized, which effectively enriches fingerprint information and enhances fingerprint discriminability, thereby improving positioning accuracy.
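To tie the pieces together, a compact, illustrative PyTorch sketch of the pipeline just described follows: a shared CNN over the views, an MLP over the CSI amplitudes, concatenation in a fusion layer, and a triplet objective optimized with Adam. All layer sizes, the number of views (4), the image resolution (64x64), the CSI dimension (30), and the margin are assumptions for illustration; nn.TripletMarginLoss is a standard stand-in for the hinge form of L above (note it uses the plain, not squared, Euclidean distance):

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Illustrative CSI + multi-view image fusion representation network."""

    def __init__(self, num_views=4, csi_dim=30, embed_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                  # shared CNN applied to every view
            nn.Conv2d(1, 16, 3, padding=1), nn.MaxPool2d(2),
            nn.ReLU(), nn.Dropout(0.5), nn.Flatten())
        self.mlp = nn.Sequential(                  # MLP for CSI amplitude features
            nn.Linear(csi_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        cnn_out = 16 * 32 * 32                     # flattened CNN output per view
        self.fusion = nn.Linear(cnn_out + 128, embed_dim)  # feature fusion layer

    def forward(self, csi, views):                 # views: (B, num_views, 1, 64, 64)
        # superposition strategy: sum the per-view feature vectors
        img_feat = sum(self.cnn(views[:, i]) for i in range(views.size(1)))
        fused = torch.cat([self.mlp(csi), img_feat], dim=1)
        return self.fusion(fused)                  # fused fingerprint

model = FusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             betas=(0.9, 0.999), eps=1e-8)
triplet = nn.TripletMarginLoss(margin=0.2)

# one training step on a dummy triplet batch
csi_a, csi_p, csi_n = (torch.randn(8, 30) for _ in range(3))
img_a, img_p, img_n = (torch.randn(8, 4, 1, 64, 64) for _ in range(3))
loss = triplet(model(csi_a, img_a), model(csi_p, img_p), model(csi_n, img_n))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```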

In the description of this specification, reference to the terms "one embodiment", "one specific embodiment", "some embodiments", "for example", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The order of steps in each embodiment schematically illustrates the implementation of the present invention; it is not limiting and may be adjusted as required.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block in the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory result in an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

The specific embodiments described above further explain the purpose, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A fusion representation network model training method, characterized by comprising:
acquiring a training sample set, each training sample comprising channel state information data and image data of a plurality of orientations corresponding to a same environmental location;
performing feature extraction on the channel state information data in each training sample by using a multilayer perceptron network to obtain a channel state information feature map corresponding to that training sample;
performing feature extraction on the image data of each orientation in each training sample by using convolutional neural networks with identical weights to obtain feature maps of the images of each orientation corresponding to that training sample;
fusing the feature maps of the images of all orientations corresponding to a same training sample to obtain a multi-orientation feature map corresponding to that training sample;
concatenating, by using a feature fusion layer, the channel state information feature map and the multi-orientation feature map corresponding to a same training sample to construct a fused representation of the channel state information and the images corresponding to that training sample; and
constructing a fused fingerprint library from the feature fingerprints corresponding to the fused representations of the channel state information and the images of all training samples, and performing, based on the fused fingerprint library and by using a set measurement index, parameter optimization on the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer, so that feature fingerprints of a same environmental location are close to each other and feature fingerprints of different environmental locations are far apart, thereby obtaining a trained network model as the fusion representation network model; wherein the set measurement index is used to measure the distance between feature fingerprints in the fused fingerprint library.

2. The fusion representation network model training method according to claim 1, characterized in that the channel state information data in the training samples is channel state amplitude data.

3. The fusion representation network model training method according to claim 1, characterized in that fusing the feature maps of the images of all orientations corresponding to a same training sample to obtain a multi-orientation feature map corresponding to that training sample comprises:
superimposing and fusing the feature maps of the images of all orientations corresponding to a same training sample to obtain the multi-orientation feature map corresponding to that training sample.

4. The fusion representation network model training method according to claim 1, characterized in that performing feature extraction on the image data of each orientation in each training sample by using convolutional neural networks with identical weights comprises:
performing one-dimensional key feature extraction on the image data of each orientation in each training sample by using convolutional neural networks with identical weights to obtain the feature maps of the images of each orientation corresponding to that training sample.

5. The fusion representation network model training method according to claim 4, characterized in that the convolutional neural network comprises a convolution layer, a pooling layer, and a flattening layer; and performing one-dimensional key feature extraction on the image data of each orientation comprises:
performing a convolution operation on the image data of each orientation in the training sample by using the convolution layer to obtain a multi-dimensional feature map;
performing a max-pooling operation on the multi-dimensional feature map by using the pooling layer to obtain a simplified, dimension-reduced feature map; and
sequentially applying a ReLU activation function to nonlinearly transform the simplified, dimension-reduced feature map, randomly discarding some neuron nodes by using a Dropout strategy, and flattening by using the flattening layer to obtain the feature maps of the images of each orientation corresponding to that training sample.

6. The fusion representation network model training method according to claim 1, characterized in that constructing the fused fingerprint library comprises:
dividing the feature fingerprints corresponding to the fused representations of the channel state information and the images of all training samples into triplets to form the fused fingerprint library, wherein each triplet comprises a feature fingerprint anchor sample, a feature fingerprint positive sample at the same location as the anchor sample, and a feature fingerprint negative sample at a different location from the anchor sample;
and performing, based on the fused fingerprint library and by using the set measurement index, parameter optimization on the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer comprises:
taking the negative of the set measurement index as the objective function to be minimized, and performing parameter optimization on the network model comprising the multilayer perceptron network, the convolutional neural networks, and the feature fusion layer by using the Adam algorithm and based on the fused fingerprint library.

7. The fusion representation network model training method according to claim 6, characterized in that the objective function is expressed as:

L = -D = \sum_{i=1}^{N} \left( \left\| v_i^a - v_i^p \right\|_2^2 - \left\| v_i^a - v_i^n \right\|_2^2 + \alpha \right)

where L is the value of the objective function, D is the set measurement index, N is the total number of triplets, v_i^a, v_i^p, and v_i^n respectively denote the feature fingerprint anchor sample, the feature fingerprint positive sample, and the feature fingerprint negative sample, and α is a tunable parameter.

8. A fused feature fingerprint representation method, characterized by comprising:
acquiring channel state information data and image data of a plurality of orientations at a set environmental location; and
processing the channel state information data and the image data of the plurality of orientations by using a fusion representation network model trained by the method according to any one of claims 1 to 7 to obtain a feature fingerprint at the set environmental location.

9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 8.

10. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.