CN119513612B - Training method of graph neural localization model, method and device for localizing target object - Google Patents

Training method of graph neural localization model, method and device for localizing target object

Info

Publication number
CN119513612B
CN119513612B
Authority
CN
China
Prior art keywords
channel state
graph
undirected
training
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510072849.8A
Other languages
Chinese (zh)
Other versions
CN119513612A (en)
Inventor
陈彦
张田雨
孙启彬
吴枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Original Assignee
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority to CN202510072849.8A
Publication of CN119513612A
Application granted
Publication of CN119513612B
Legal status: Active
Anticipated expiration


Abstract

The application provides a training method of a graph neural positioning model, and a positioning method and apparatus for a target object. The method includes: obtaining a first training set and a second training set, where the first training set includes a plurality of first channel state undirected graphs, and the second training set includes a plurality of second channel state undirected graphs and a position label of the target object corresponding to each second channel state undirected graph; pre-training an initial graph neural network using the plurality of first channel state undirected graphs to obtain an intermediate graph neural network; and training an initial positioning model using the plurality of second channel state undirected graphs and the plurality of position labels to obtain the graph neural positioning model, where the initial positioning model includes a plurality of mutually independent intermediate graph neural networks.

Description

Training method of graph neural positioning model, and positioning method and device of target object
Technical Field
The present application relates to the technical field of neural networks, and more particularly, to a training method of a graph neural positioning model, a positioning method of a target object, a training apparatus of a graph neural positioning model, a positioning apparatus of a target object, an electronic device, a computer readable storage medium, and a computer program product.
Background
Indoor positioning systems (Indoor Positioning System, IPS) aim to provide accurate positions of people or objects in indoor environments, where the Global Positioning System (GPS) and other satellite positioning technologies lack accuracy or fail entirely. IPS is an important basic task with significant value in commerce, military, retail, inventory tracking, and other fields. However, existing indoor positioning technologies still have problems: vision-based indoor positioning is easily affected by illumination conditions and raises serious privacy concerns, while radar-based indoor positioning has a high deployment cost. In contrast, widely deployed commercial WiFi devices are more cost-effective, providing an important avenue for indoor positioning systems. The Channel State Information (CSI) generated by commercial WiFi devices can provide detailed information about signal propagation paths, including multipath effects, scattering, fading, etc., which gives CSI great potential in fine-grained indoor positioning.
Disclosure of Invention
In view of this, the present application provides a training method of a graph neural positioning model, a positioning method of a target object, a training apparatus of a graph neural positioning model, a positioning apparatus of a target object, an electronic device, a computer-readable storage medium, and a computer program product.
One aspect of the present application provides a training method of a graph neural positioning model, comprising:
acquiring a first training set and a second training set, wherein the first training set comprises a plurality of first channel state undirected graphs, and the second training set comprises a plurality of second channel state undirected graphs and a position label of a target object corresponding to each second channel state undirected graph;
pre-training the initial graph neural network by using a plurality of the first channel state undirected graphs to obtain an intermediate graph neural network;
Training an initial positioning model by using a plurality of second channel state undirected graphs and a plurality of position labels to obtain the graph neural positioning model, wherein the initial positioning model comprises a plurality of independent intermediate graph neural networks.
Another aspect of the present application provides a method for locating a target object, including:
acquiring, in a case where the transmitting end of the target object interacts with a plurality of wireless devices, the target channel state matrices generated by the plurality of wireless devices;
generating, for each target channel state matrix, a target channel state undirected graph according to that target channel state matrix;
inputting the target channel state undirected graphs into a graph neural positioning model to obtain a plurality of initial position prediction sets;
and performing weighted average processing on the plurality of initial position prediction sets to obtain the target position of the target object.
According to an embodiment of the present application, the initial position prediction set includes an abscissa prediction mean, an abscissa prediction variance, an ordinate prediction mean, and an ordinate prediction variance.
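The weighting scheme for the weighted average processing is not spelled out here; a common choice consistent with per-coordinate prediction variances is inverse-variance weighting. The sketch below illustrates that choice (the function name and dictionary keys are illustrative, not from the original):

```python
def fuse_predictions(preds):
    """Fuse per-network initial position prediction sets, each holding
    mean_x/var_x/mean_y/var_y, by inverse-variance weighted averaging
    (an assumed weighting scheme, not stated verbatim in the patent)."""
    wx = [1.0 / p["var_x"] for p in preds]  # weight each abscissa mean by 1/variance
    wy = [1.0 / p["var_y"] for p in preds]  # likewise for ordinate means
    x = sum(w * p["mean_x"] for w, p in zip(wx, preds)) / sum(wx)
    y = sum(w * p["mean_y"] for w, p in zip(wy, preds)) / sum(wy)
    return x, y
```

With equal variances this reduces to a plain average; a network that is less certain (larger variance) contributes proportionally less to the fused target position.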
Another aspect of the present application provides a training apparatus for a graph neural positioning model, comprising:
the first acquisition module is used for acquiring a first training set and a second training set, wherein the first training set includes a plurality of first channel state undirected graphs, and the second training set includes a plurality of second channel state undirected graphs and a position label of the target object corresponding to each second channel state undirected graph;
the pre-training module is used for pre-training the initial graph neural network using a plurality of the first channel state undirected graphs to obtain an intermediate graph neural network;
and the target training module is used for training an initial positioning model using the plurality of second channel state undirected graphs and the plurality of position labels to obtain the graph neural positioning model, wherein the initial positioning model includes a plurality of mutually independent intermediate graph neural networks.
Another aspect of the present application provides a positioning apparatus for a target object, including:
the second acquisition module is used for acquiring the target channel state matrices generated by a plurality of wireless devices in a case where the transmitting end of the target object interacts with the plurality of wireless devices;
the generating module is used for generating, for each target channel state matrix, a target channel state undirected graph according to that target channel state matrix;
the prediction module is used for inputting the target channel state undirected graphs into a graph neural positioning model to obtain a plurality of initial position prediction sets;
and the obtaining module is used for performing weighted average processing on the plurality of initial position prediction sets to obtain the target position of the target object.
Another aspect of the present application provides an electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods described above.
Another aspect of the application provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the method described above.
Another aspect of the application provides a computer program product comprising computer-executable instructions which, when executed, implement the method described above.
According to the embodiment of the application, pre-training is first performed using the first channel state undirected graphs, and the pre-trained intermediate graph neural network is then trained using the second channel state undirected graphs and the position labels, yielding a graph neural positioning model that can be used to locate the target object. Because the high flexibility of the graph structure is fully exploited, a general graph neural positioning model oriented to the channel state information of actual commercial scenarios is realized. Meanwhile, model pre-training is carried out on a large number of unlabeled first channel state undirected graphs, which improves the positioning robustness of the graph neural positioning model, and an uncertainty learning strategy is introduced during pre-training to cope with the complex and changeable environments of practical applications, further improving the positioning stability and reliability of the graph neural positioning model.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following description of embodiments of the present application with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary system architecture to which a training method of a graph neural positioning model or a positioning method of a target object may be applied, according to an embodiment of the present application;
FIG. 2 illustrates a flow chart of a training method of the graph neural positioning model, according to an embodiment of the present application;
FIG. 3 shows a flow chart of a method of locating a target object according to an embodiment of the application;
FIG. 4 shows a schematic view of a building-level indoor positioning scenario according to a first embodiment of the present application;
FIG. 5 shows a schematic view of the data set point locations acquired in building-level indoor positioning according to the first embodiment of the application;
FIG. 6 shows the 50th (median) and 90th (tail) percentile error scores and floor accuracy of various apparatuses and methods according to the first embodiment of the present application;
FIG. 7 shows a schematic view of the data set point locations acquired in building-level indoor positioning according to a second embodiment of the application;
FIG. 8 shows a schematic view of the data set point locations acquired in building-level indoor positioning according to the second embodiment of the application;
FIG. 9 illustrates a block diagram of a training apparatus of the graph neural positioning model, according to an embodiment of the present application;
FIG. 10 shows a block diagram of a positioning apparatus of a target object according to an embodiment of the application; and
FIG. 11 shows a block diagram of an electronic device adapted to implement the methods described above, according to an embodiment of the application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the application. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the application. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a convention should in general be interpreted in the sense in which one of ordinary skill in the art would understand it (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
In an indoor environment, commercial WiFi routers are deployed as receiving ends (Access Point, AP), that is, wireless devices, with known antenna spacing and antenna type at each receiving end. The WiFi routers are controlled by a central processor and an AC (wireless access controller) cluster, and Channel State Information (CSI) measurement is performed while normal communication is maintained. After the channel state information of the user terminal device is received by the receiving end, it is represented as follows. Let the received signal matrix at the receiving end be Y; its relationship to the input signal at the transmitting end can be described by the matrix equation Y = H·Q·C·X, where H is the physical channel matrix whose row dimension N_rx indicates the number of receive antennas, C and Q are the cyclic shift matrix and the spatial mapping matrix at the transmitting end, and X is the transmitted signal matrix.
The Channel State Information (CSI) matrix measured at the receiving end, H_CSI, is expressed as H_CSI = H·P, where P = Q·C is the matrix product formed at the transmitting end. Based on this model, the CSI matrix reported by the receiving end is represented as a three-dimensional matrix of dimensions N_sc × N_rx × N_sts, where each component H_k corresponds to one subcarrier k.
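The shapes in this signal model can be illustrated numerically. The sketch below uses hypothetical symbol and dimension names for the physical channel, spatial mapping, and cyclic shift matrices (the original symbols were lost in extraction); it only demonstrates how the reported CSI forms a three-dimensional array with one receive-antenna-by-stream slice per subcarrier:

```python
import numpy as np

rng = np.random.default_rng(0)
# receive antennas, space-time streams, transmit antennas, subcarriers (illustrative sizes)
n_rx, n_sts, n_tx, n_sc = 3, 2, 2, 64

# Per-subcarrier physical channel H, spatial mapping Q, and cyclic shift C.
H = rng.standard_normal((n_sc, n_rx, n_tx)) + 1j * rng.standard_normal((n_sc, n_rx, n_tx))
Q = rng.standard_normal((n_sc, n_tx, n_sts))
C = rng.standard_normal((n_sc, n_sts, n_sts))

# The CSI the receiver reports is the effective channel H_csi = H @ Q @ C:
# a 3-D array with one (n_rx x n_sts) complex slice per subcarrier.
H_csi = H @ Q @ C
assert H_csi.shape == (n_sc, n_rx, n_sts)
```

The stacked `@` product multiplies the per-subcarrier matrices in one call, matching the statement that each component of the reported CSI corresponds to one subcarrier.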
Consider a positioning scenario in a complex, real indoor environment. In this configuration, the user equipment (such as a mobile phone, PC, or notebook computer) acts as the transmitting end, forming a set T, while the Access Points (APs) installed throughout the facility act as receiving ends, forming a set R.
For convenience of explanation, a basic scenario is defined in which a transmitting end t interacts with multiple receiving ends within a specific time window W. The window is centered at a time τ and lasts one second. Within the time window W, the transmitting end performs channel estimation with several nearby APs. Each AP then reports the estimated Channel State Information (CSI) for each training sequence, noted H_m, where m typically indexes the diversity or multiplexing mode. The total number of receiving ends involved is denoted N_AP.
The process of converting the data collected by multiple nearby APs into a three-dimensional vector representing the location of the transmitting end is referred to as a "positioning event" (LocEvent). To maintain the reliability of positioning results, all events involving fewer than three APs may be excluded, i.e., the cases in which N_AP < 3.
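The exclusion rule for positioning events can be sketched as a simple filter; the event representation below is hypothetical:

```python
def valid_loc_events(events, min_aps=3):
    """Keep only positioning events observed by at least `min_aps` access points,
    mirroring the N_AP < 3 exclusion rule described above."""
    return [e for e in events if len(e["aps"]) >= min_aps]

events = [
    {"aps": ["ap1", "ap2"], "t": 0.0},          # only two APs: excluded
    {"aps": ["ap1", "ap2", "ap3"], "t": 1.0},   # three APs: kept
]
print(len(valid_loc_events(events)))  # 1
```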
Processing the data of the above scenario with existing positioning methods presents a number of challenges. First, conventional CSI fingerprint positioning schemes generally depend on vector encodings in Euclidean space, and in actual commercial scenarios this encoding often fails due to the heterogeneity of receiving end devices and the diversity of communication modes. Second, large-scale unlabeled CSI data is difficult to exploit in practice, and how to effectively use such data to improve positioning performance has not yet been fully addressed. Furthermore, conditions in an actual deployment environment are complex and varied, and how to maintain the robustness of a positioning scheme under such conditions remains a major challenge limiting the wide application of existing systems in real environments.
In view of the above, the embodiments of the present application provide a training method of a graph neural positioning model, a positioning method of a target object, and a device thereof, where the method includes obtaining a first training set and a second training set, where the first training set includes a plurality of first channel state undirected graphs, the second training set includes a plurality of second channel state undirected graphs and a position tag of the target object corresponding to each of the second channel state undirected graphs, pre-training an initial graph neural network using the plurality of first channel state undirected graphs to obtain an intermediate graph neural network, and training the initial positioning model using the plurality of second channel state undirected graphs and the plurality of position tags to obtain the graph neural positioning model, where the initial positioning model includes a plurality of mutually independent intermediate graph neural networks.
In embodiments of the present application, the data involved (e.g., including but not limited to user personal information) is collected, updated, analyzed, processed, used, transmitted, provided, disclosed, stored, etc., all in compliance with relevant legal regulations, used for legal purposes, and without violating the public welfare. In particular, necessary measures are taken for personal information of the user, illegal access to personal information data of the user is prevented, and personal information safety and network safety of the user are maintained.
In embodiments of the present application, the user's authorization or consent is obtained before the user's personal information is obtained or collected.
FIG. 1 illustrates an exemplary system architecture 100 in which a training method of a graph neural localization model or a localization method of a target object may be applied in accordance with an embodiment of the present application. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present application may be applied to help those skilled in the art understand the technical content of the present application, and does not mean that the embodiments of the present application may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 is a medium used to provide a communication link between the first terminal device 101, the second terminal device 102, the third terminal device 103, and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the first terminal device 101, the second terminal device 102, the third terminal device 103, to receive or send messages etc. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, and/or social platform software, etc. (by way of example only) may be installed on the first terminal device 101, the second terminal device 102, the third terminal device 103.
The first terminal device 101, the second terminal device 102, the third terminal device 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by the user using the first terminal device 101, the second terminal device 102, and the third terminal device 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the training method of the neural positioning model or the positioning method of the target object provided in the embodiment of the present application may be generally executed by the server 105. Accordingly, the training device of the neural positioning model or the positioning device of the target object provided in the embodiment of the present application may be generally disposed in the server 105. The training method of the neural positioning model or the positioning method of the target object provided by the embodiment of the present application may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server 105. Accordingly, the training apparatus of the neural positioning model or the positioning apparatus of the target object provided in the embodiment of the present application may also be disposed in a server or a server cluster that is different from the server 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server 105. Or the training method of the neural positioning model or the positioning method of the target object provided by the embodiment of the present application may also be performed by the first terminal device 101, the second terminal device 102 or the third terminal device 103, or may also be performed by other terminal devices different from the first terminal device 101, the second terminal device 102 or the third terminal device 103. 
Accordingly, the training apparatus of the neural positioning model or the positioning apparatus of the target object provided in the embodiment of the present application may also be disposed in the first terminal device 101, the second terminal device 102, or the third terminal device 103, or disposed in other terminal devices different from the first terminal device 101, the second terminal device 102, or the third terminal device 103.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers, as required by the implementation.
FIG. 2 illustrates a flow chart of a training method of the graph neural positioning model, according to an embodiment of the application.
As shown in FIG. 2, the training method of the graph neural positioning model includes operations S201 to S203.
In operation S201, a first training set and a second training set are obtained, wherein the first training set includes a plurality of first channel state undirected graphs, and the second training set includes a plurality of second channel state undirected graphs and a position tag of a target object corresponding to each of the second channel state undirected graphs;
in operation S202, pre-training an initial graph neural network by using a plurality of first channel state undirected graphs to obtain an intermediate graph neural network;
in operation S203, an initial positioning model is trained using a plurality of second channel state undirected graphs and a plurality of position labels to obtain a graph neural positioning model, wherein the initial positioning model includes a plurality of independent intermediate graph neural networks.
According to the embodiment of the application, the target object may be a user, a terminal device such as a mobile phone, a pet, or the like.
According to an embodiment of the present application, the channel state undirected graph includes a plurality of nodes and edges between different nodes, the nodes are constructed according to attribute information in a channel state information matrix of the wireless device, and the edges are constructed according to amplitude, phase difference or Channel Impulse Response (CIR) of a receiving antenna of the wireless device.
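As a much-simplified illustration of such a channel state undirected graph, the sketch below builds nodes from per-antenna amplitudes and edges from amplitude ratios and phase differences for a single subcarrier. The data layout (one complex channel estimate per antenna) is hypothetical and greatly reduced from a real CSI matrix:

```python
import cmath

def build_csi_graph(csi):
    """csi: dict mapping antenna id -> complex channel estimate for one subcarrier
    (a toy stand-in for the per-AP CSI matrix). Nodes carry per-antenna amplitude
    attributes; undirected edges carry the amplitude ratio and phase difference
    between each pair of antennas."""
    nodes = {a: {"amplitude": abs(h)} for a, h in csi.items()}
    edges = {}
    ants = sorted(csi)
    for i, a in enumerate(ants):
        for b in ants[i + 1:]:
            edges[(a, b)] = {
                "phase_diff": cmath.phase(csi[a]) - cmath.phase(csi[b]),
                "amp_ratio": abs(csi[a]) / (abs(csi[b]) + 1e-9),
            }
    return nodes, edges
```

For n antennas this yields n·(n−1)/2 undirected edges; a real implementation would add per-subcarrier features and could also use CIR-derived edge attributes, as the text notes.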
According to embodiments of the application, the initial graph neural network may be any type of graph neural network (Graph Neural Network, GNN), such as a Graph Convolutional Network (GCN), a Graph Auto-Encoder (GAE), or the like.
According to the embodiment of the application, the initial graph neural network is pre-trained by utilizing a plurality of first channel state undirected graphs, so that a pre-trained intermediate graph neural network is obtained, and then the intermediate graph neural network is trained with labels by utilizing the second channel state undirected graphs and the position labels again, so that a graph neural positioning model is obtained.
According to the embodiment of the application, pre-training is first performed using the first channel state undirected graphs, and the pre-trained intermediate graph neural network is then trained using the second channel state undirected graphs and the position labels, yielding a graph neural positioning model that can be used to locate the target object. Because the high flexibility of the graph structure is fully exploited, a general graph neural positioning model oriented to the channel state information of actual commercial scenarios is realized. Meanwhile, model pre-training is carried out on a large number of unlabeled first channel state undirected graphs, which improves the positioning robustness of the graph neural positioning model, and an uncertainty learning strategy is introduced during pre-training to cope with the complex and changeable environments of practical applications, further improving the positioning stability and reliability of the graph neural positioning model.
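The two-stage procedure, pre-training on unlabeled graphs and then fine-tuning several mutually independent copies of the pre-trained network, can be skeletonized as follows. The function names and placeholder update steps are illustrative only, not the patent's actual training code:

```python
import copy

def pretrain(model, unlabeled_graphs, loss_fn, steps=1):
    """Self-supervised stage: no position labels required."""
    for _ in range(steps):
        for g in unlabeled_graphs:
            loss_fn(model, g)  # would drive a gradient update in a real setup
    return model

def finetune(pretrained, labeled_pairs, k=3):
    """The initial positioning model holds k mutually independent copies of the
    pre-trained network (deep copies, so parameters are not shared), each
    fine-tuned on (graph, position label) pairs with the combined loss."""
    ensemble = [copy.deepcopy(pretrained) for _ in range(k)]
    for net in ensemble:
        for g, label in labeled_pairs:
            pass  # supervised update of `net` would go here
    return ensemble
```

The `deepcopy` is the key point: each intermediate network is trained independently, which is what later allows the per-network predictions to be fused by a weighted average.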
According to an embodiment of the present application, the first training set further includes a floor pseudo tag of the target object corresponding to each of the first channel state undirected graphs.
According to an embodiment of the present application, pre-training the initial graph neural network using a plurality of first channel state undirected graphs to obtain an intermediate graph neural network includes:
for two first channel state undirected graphs that have an association relation, processing the two graphs with the initial graph neural network to obtain a first output set, where the first output set includes a time dimension feature and a space dimension feature for each first channel state undirected graph, the space dimension feature including the predicted floor and the predicted position for that graph;
processing the two space dimension features with a metric loss function to obtain a first loss value for each first channel state undirected graph;
processing the two time dimension features with a contrast loss function to obtain a second loss value corresponding to the two first channel state undirected graphs;
for each first channel state undirected graph, processing its space dimension feature and floor pseudo label with a mean absolute error loss function to obtain a third loss value;
generating a pre-training loss value from the first loss values, the second loss value, and the third loss values;
and iteratively adjusting network parameters of the initial graph neural network according to the pre-training loss value to obtain the trained intermediate graph neural network.
According to the embodiment of the application, the association relation represents a temporal association: two first channel state undirected graphs having the association relation may be two graphs whose acquisition times are close to each other.
According to an embodiment of the application, the first loss value L_metric is shown in formula (1), the second loss value L_contrast is shown in formula (2), and the pre-training loss value L_pre is shown in formula (3):
L_metric = max(0, ‖d_i − p_a‖₂ − ‖d_i − p_b‖₂ + m)    (1)
where d_i is the predicted position of the target object corresponding to the i-th first channel state undirected graph, p_a and p_b are the true locations of the two wireless devices corresponding to the first channel state undirected graph (p_a being the device receiving the higher signal power), and m is a hyperparameter;
L_contrast = ‖t_i − t_j‖₂²    (2)
where t_i and t_j respectively represent the time dimension features of the two first channel state undirected graphs G_i and G_j;
L_pre = L_metric,i + L_metric,j + L_contrast + L_mae,i + L_mae,j    (3)
where L_mae,i and L_mae,j are the third loss values corresponding to the two first channel state undirected graphs, with L_mae = |f̂ − f|, f̂_i and f̂_j being the predicted floors of the two graphs, and f_i and f_j their floor pseudo labels; the floor pseudo labels are determined from the floors of the wireless devices associated with the first channel state undirected graphs.
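Since the exact symbols of formulas (1)-(3) were garbled in extraction, the sketch below implements one plausible reading of them: a margin-based metric loss over 2-D positions, a squared-distance contrastive term over time-dimension features, and an MAE floor term, summed into the pre-training loss. All functional forms and names here are assumptions, not the patent's verbatim formulas:

```python
import math

def metric_loss(d_pred, p_hi, p_lo, margin=1.0):
    """Margin loss pulling the predicted 2-D position toward the AP with
    higher received power (p_hi) relative to the weaker one (p_lo);
    `margin` plays the role of the hyperparameter m. (Assumed form.)"""
    return max(0.0, math.dist(d_pred, p_hi) - math.dist(d_pred, p_lo) + margin)

def contrast_loss(t_i, t_j):
    """Squared distance between time-dimension features of two temporally
    close graphs. (Assumed form.)"""
    return sum((a - b) ** 2 for a, b in zip(t_i, t_j))

def floor_loss(floor_pred, floor_pseudo):
    """Mean absolute error against the floor pseudo label."""
    return abs(floor_pred - floor_pseudo)

def pretrain_loss(metric_losses, contrast, floor_losses):
    """Unweighted sum of the per-graph metric losses, the pairwise contrast
    loss, and the per-graph floor losses. (Weights, if any, are unknown.)"""
    return sum(metric_losses) + contrast + sum(floor_losses)
```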
According to an embodiment of the application, a large amount of unlabeled data (i.e., first channel state undirected graphs) is used for pre-training from both the temporal and the spatial dimension. In particular, when analyzing unlabeled first channel state undirected graphs, the timestamp information not only serves as a readily available resource but also provides valuable prior information for positioning, given the finite movement speed of the user (i.e., the target object). It can reasonably be assumed that two first channel state undirected graphs collected from the same user within a short time interval are likely to come from geographically close places, whether the user is stationary or moving. In contrast, if the acquisition interval between the first channel state undirected graphs is long, or they come from devices of different users, determining spatial similarity becomes significantly harder. This scheme is implemented using the contrast loss function shown in formula (2).
In accordance with embodiments of the present application, in addition to time domain information, the signals received by most wireless devices also provide prior knowledge in the spatial dimension for positioning. In one positioning event (LocEvent), multiple access points (wireless device APs) are involved in measuring the current location of the transmitting end at a given time. This arrangement means that the relative received power at the different APs can be used to generate preliminary positioning results: typically, an AP that records higher relative power is more likely to be near the transmitting end under line-of-sight conditions. The locations of the APs are assumed known; they are typically obtained from Computer Aided Design (CAD) drawings or noted during collection of the first channel state undirected graphs. Thus, the predicted position d in the output can be constrained to lie closer to the AP that receives the higher signal power. This scheme is implemented using the metric learning loss function shown in formula (1).
According to an embodiment of the application, the third loss function is calculated using a spatial prior, taking the floor number of the nearest AP as a floor pseudo tag. Finally, the pre-training loss value of the pre-training process is shown in formula (3).
According to an embodiment of the present application, training an initial positioning model using a plurality of second channel state undirected graphs and a plurality of position tags to obtain a graph neural positioning model includes:
Aiming at each intermediate graph neural network, processing each second channel state undirected graph by using the intermediate graph neural network to obtain a second output set, wherein the second output set comprises an abscissa predicted value, an ordinate predicted value, a floor predicted value, an abscissa variance and an ordinate variance;
generating abscissa probability distribution information and ordinate probability distribution information according to the abscissa predicted value and the ordinate predicted value respectively;
Generating probability distribution divergences according to probability distribution information and coordinate tag function values corresponding to the probability distribution information aiming at any one of abscissa probability distribution information and ordinate probability distribution information, wherein the position tags comprise an abscissa tag, an ordinate tag and a floor tag;
Generating a floor loss value according to the floor predicted value and the floor label;
Generating a combined loss value according to the probability distribution divergence corresponding to the abscissa, the probability distribution divergence corresponding to the ordinate and the floor loss value;
And iteratively adjusting network parameters of the intermediate graph neural network according to the combined loss value to obtain a trained target graph neural network, wherein the graph neural positioning model comprises a plurality of target graph neural networks.
According to an embodiment of the application, the probability distribution divergence D_x of the abscissa probability distribution information is shown in equation (4), and the combined loss value L is shown in equation (5):

D_x = KL( δ(x − x*) ‖ N(μ_x, σ_x²) ) = (x* − μ_x)² / (2σ_x²) + (1/2)·log σ_x² + C (4)

wherein x is the abscissa predicted value, μ_x is the mean of the abscissa predicted values, σ_x² is the abscissa variance, δ(x − x*) is the impulse response function value obtained based on the Dirac delta function centered on the abscissa tag x*, namely the coordinate tag function value, and C is a constant independent of the network parameters;

L = D_x + D_y + L_floor (5)

wherein D_x is the probability distribution divergence of the abscissa probability distribution information, D_y is the probability distribution divergence of the ordinate probability distribution information, and L_floor is the floor loss value, calculated as the L1 loss between the floor predicted value and the floor tag.
According to the embodiment of the application, a further precision improvement is realized with the graph neural network based on uncertainty self-adaptation. The application extends the traditional coordinate point prediction task so that it includes not only the mean value μ but also a prediction of the variance σ (e.g., the abscissa variance and the ordinate variance), defining a probability distribution N(μ_x, σ_x²) over the abscissa predicted value x, and likewise for the ordinate. Given a Dirac delta function centered on the position tag (e.g., abscissa tag x* or ordinate tag y*), the KL divergence between the probability distribution N(μ_x, σ_x²) and the Dirac delta function, i.e., the probability distribution divergence of the abscissa, is minimized through equation (4).
According to an embodiment of the present application, the probability distribution divergence of the ordinate is obtained simultaneously based on the above principle, and furthermore, the floor loss value between the floor predicted value and the floor tag is calculated based on the L1 loss function. The combined loss value in this training can be calculated according to equation (5).
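Up to an additive constant, the KL divergence of equation (4) reduces to the Gaussian negative log-likelihood of the coordinate tag under the predicted distribution. A minimal sketch of the combined loss of equation (5), with the floor term as an L1 loss as described above (the dictionary layout of the network outputs is an assumption):

```python
import math

def gaussian_nll(label, mu, var, eps=1e-6):
    """KL(delta_label || N(mu, var)) up to an additive constant: the
    negative log-likelihood of the label under the predicted Gaussian.
    eps guards against a degenerate (near-zero) predicted variance."""
    var = max(var, eps)
    return 0.5 * ((label - mu) ** 2 / var + math.log(var))

def combined_loss(x_tag, y_tag, floor_tag, pred):
    """Combined fine-tuning loss: abscissa and ordinate divergences plus
    the L1 floor loss, matching the structure of equation (5)."""
    d_x = gaussian_nll(x_tag, pred["mu_x"], pred["var_x"])
    d_y = gaussian_nll(y_tag, pred["mu_y"], pred["var_y"])
    l_floor = abs(floor_tag - pred["floor"])   # L1 between floor pred and tag
    return d_x + d_y + l_floor
```

Note that the variance term rewards the network for reporting low variance only when the mean is accurate, which is the uncertainty self-adaptation the text describes.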
According to an embodiment of the present application, any one of the first channel state undirected graph and the second channel state undirected graph is generated by:
Acquiring channel state matrixes generated by different wireless devices, wherein the channel state matrixes are generated by the wireless devices in the same time window when the wireless devices interact with the transmitting end of the target object;
Aiming at each channel state matrix, constructing nodes in the channel state undirected graph according to the attribute in the channel state matrix;
Based on the amplitude and the receiving attribute of the receiving antenna of the wireless device in the channel state matrix, generating edges between different nodes in the constructed channel state undirected graph, wherein the receiving attribute comprises channel impulse response information or phase differences of different receiving antennas.
In a specific embodiment, the scenario is configured with one transmitting end and two receiving ends (i.e., wireless devices). This scenario involves two channel state matrices, one for each receiving end, with both matrices having the same dimensions. A channel state undirected graph is constructed with five nodes from the two sets of amplitudes and Channel Impulse Responses (CIRs) and the phase differences of the two receiving ends. These channel characteristics serve as the feature data of each corresponding node. To ensure consistency, the feature dimension is normalized to 245, zero-padding when a feature is shorter and downsampling when that dimension is exceeded. In addition, other supplementary information, such as the location of the Access Point (AP), the center frequency of the CSI, and the Received Signal Strength (RSS), may be encoded to further enrich the representation of the channel state undirected graph. Edges are formed in the graph structure by establishing connections between the amplitude and phase difference of each receive antenna, and between the amplitude and CIR of the same receive antenna.
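A minimal sketch of this five-node construction follows; the node ordering, the exact edge pattern, and the uniform downsampling scheme are assumptions consistent with, but not dictated by, the description:

```python
import numpy as np

def pad_or_downsample(feat, target=245):
    """Normalize a node feature vector to length 245: zero-pad shorter
    features, uniformly downsample longer ones (as in the embodiment)."""
    feat = np.asarray(feat, dtype=float)
    if len(feat) < target:
        return np.concatenate([feat, np.zeros(target - len(feat))])
    idx = np.linspace(0, len(feat) - 1, target).astype(int)
    return feat[idx]

def build_csi_graph(amp1, cir1, amp2, cir2, phase_diff):
    """Five-node undirected graph for one transmitter / two receivers:
    amplitude and CIR nodes per receiving end plus one phase-difference
    node. Returns the node feature matrix and the adjacency matrix."""
    nodes = np.stack([pad_or_downsample(f)
                      for f in (amp1, cir1, amp2, cir2, phase_diff)])
    adj = np.zeros((5, 5), dtype=int)
    # amplitude <-> CIR of the same receive antenna: (0,1), (2,3)
    # amplitude <-> phase difference of each receive antenna: (0,4), (2,4)
    for i, j in [(0, 1), (2, 3), (0, 4), (2, 4)]:
        adj[i, j] = adj[j, i] = 1   # undirected edges
    return nodes, adj
```

Supplementary information such as AP location, CSI center frequency, and RSS could be appended to each node's feature vector before the length normalization.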
According to an embodiment of the application, for more complex heterogeneous CSI (i.e., channel state information) data, the graph is constructed based on the channel state undirected graph generation approach described above. When processing a plurality of transmitting antennas, the antennas are first decomposed into pairs each consisting of one transmitting end and two receiving ends, and an undirected graph is constructed for each pair in the manner described above. Amplitude nodes from the same transmitting antenna are then connected to complete the construction. For multiple receiving antennas, the construction process is similar to the two-antenna case. After each sub-graph is constructed, the amplitude nodes with the highest RSS in the channel state undirected graphs of the positioning events LocEvent are connected together, thereby completing the construction of the overall channel state undirected graph.
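The merging step can be sketched as a block-diagonal combination of the sub-graph adjacency matrices followed by cross edges between amplitude nodes; the node indexing and the exact cross-edge pattern here are assumptions:

```python
import numpy as np

def merge_locevent_graphs(adjs, rss, amp_node=0):
    """Block-diagonal merge of per-pair sub-graphs of one positioning
    event, then connect each sub-graph's amplitude node to the amplitude
    node of the highest-RSS sub-graph. `amp_node` is the assumed index of
    the amplitude node within each sub-graph."""
    sizes = [a.shape[0] for a in adjs]
    offsets = np.cumsum([0] + sizes[:-1])
    merged = np.zeros((sum(sizes), sum(sizes)), dtype=int)
    for off, a in zip(offsets, adjs):
        n = a.shape[0]
        merged[off:off + n, off:off + n] = a        # copy sub-graph block
    amp_idx = [int(off) + amp_node for off in offsets]
    best = amp_idx[int(np.argmax(rss))]             # highest-RSS amplitude node
    for i in amp_idx:
        if i != best:
            merged[i, best] = merged[best, i] = 1   # undirected cross edges
    return merged
```

The result is a single undirected adjacency matrix covering the whole positioning event, which is what the graph neural network consumes.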
Fig. 3 shows a flow chart of a method of locating a target object according to an embodiment of the application.
As shown in FIG. 3, the target object positioning method includes operations S301-S304.
In operation S301, in a case where a transmitting end of a target object interacts with a plurality of wireless devices, acquiring a target channel state matrix generated by the plurality of wireless devices;
in operation S302, for each target channel state matrix, a target channel state undirected graph is generated according to the target channel state matrix;
In operation S303, a target channel state undirected graph is input to a graph neural positioning model to obtain a plurality of initial position prediction sets;
in operation S304, a weighted average process is performed on the plurality of initial position prediction sets to obtain a target position of the target object.
According to the embodiment of the application, when the target object interacts with a plurality of wireless devices through a transmitting end (e.g., a mobile phone on which the user is logged in), a plurality of target channel state matrices within the same time window are obtained and converted, based on the channel state undirected graph generation method described above, into a plurality of target channel state undirected graphs. The target channel state undirected graphs are input into the trained graph neural positioning model to obtain a plurality of initial position prediction sets, and weighted average processing is then performed on the plurality of initial position prediction sets to obtain the target position of the target object.
According to the embodiment of the application, pre-training is performed using the first channel state undirected graphs, and the pre-trained intermediate graph neural network is then trained using the second channel state undirected graphs and the position labels, so that a graph neural positioning model usable for positioning the target object is obtained. Because the high flexibility of the graph structure is fully utilized, a general graph neural positioning model oriented to the channel state information of actual commercial scenarios is realized. Meanwhile, model pre-training is carried out on a large number of unlabeled first channel state undirected graphs, which improves the positioning robustness of the graph neural positioning model; an uncertainty learning strategy is introduced to cope with the complex and changeable environments of practical applications, further improving the positioning stability and reliability of the graph neural positioning model.
According to an embodiment of the present application, the initial position prediction set includes an abscissa prediction mean, an abscissa prediction variance, an ordinate prediction mean, and an ordinate prediction variance.
According to an embodiment of the application, the target abscissa x_t of the target position is shown in equation (6), and the target ordinate y_t of the target position is shown in equation (7):

x_t = ( Σ_z μ_x^(z) / σ_x^(z)² ) / ( Σ_z 1 / σ_x^(z)² ) (6)

y_t = ( Σ_z μ_y^(z) / σ_y^(z)² ) / ( Σ_z 1 / σ_y^(z)² ) (7)

wherein μ_x^(z) is the abscissa prediction mean, σ_x^(z)² is the abscissa prediction variance, μ_y^(z) is the ordinate prediction mean, σ_y^(z)² is the ordinate prediction variance, and z indexes the z-th target graph neural network in the graph neural positioning model.
According to an embodiment of the present application, the number of target graph neural networks may be 5. Each target graph neural network has the same structure and training data but uses a different random seed. A final coordinate estimate is then calculated using the weighted average formulas, where the final target abscissa x_t and target ordinate y_t are as shown in equations (6) and (7). For the floor number, a similar weighted scheme may be used for calculation.
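The weighted average can be realized as inverse-variance weighting, which matches the use of the per-model prediction variances referenced around equations (6) and (7); that this standard form is the exact weighting intended is an assumption:

```python
import numpy as np

def fuse_predictions(mus, variances, eps=1e-6):
    """Inverse-variance weighted average over the z ensemble members:
    members reporting lower predictive variance receive higher weight.
    eps guards against division by a zero variance."""
    mus = np.asarray(mus, dtype=float)
    w = 1.0 / (np.asarray(variances, dtype=float) + eps)
    return float(np.sum(w * mus) / np.sum(w))
```

The same function is applied once to the abscissa means/variances and once to the ordinate means/variances of the z networks.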
Fig. 4 shows a schematic view of a building level indoor positioning scenario according to a first embodiment of the present application. Fig. 5 shows a schematic view of the data set point locations collected for building level indoor positioning according to the first embodiment of the application. Fig. 6 shows the 50th (median) and 90th (tail) percentile error scores and floor accuracy of different devices and methods according to the first embodiment of the present application.
In a first specific embodiment, verification is performed using the building level indoor positioning scenario shown in fig. 4 and 5. The data set used contains about 40,000 samples (i.e., channel state matrices), collected by seven different types of collection devices, including various types of mobile phones and tablet computers. During data acquisition, these devices were used hand-held or placed in a pocket.
In an actual location environment, an individual user typically owns only a limited set of mobile device types, whereas a daily location service faces many different models of handsets. Therefore, to simulate a more challenging and realistic scenario, the present application proposes and validates a "leave-one-mobile-type-out" performance test method. The method excludes a particular type of handset from the training dataset and uses only that handset during the test phase, so as to evaluate the generalization ability of the graph neural positioning model across different device types.
Experimental results show that the method can effectively code heterogeneous and comprehensive Channel State Information (CSI) in a positioning event (LocEvent), so that stronger coding capability is shown. However, as shown in fig. 6, such detailed encoding presents a challenge in terms of generalization capability across devices. Particularly in the case of training from scratch, the performance of the application on some devices is reduced. Aiming at the problem, the generalization capability of the model on different devices is remarkably improved by introducing a pre-training strategy and an uncertainty evaluation mechanism.
Further experimental results (as shown in fig. 6) compare the 50% (median) and 90% (tail) percentile error scores, as well as the floor positioning accuracy (Acc%), of the various devices and methods. The results show that the application has significant advantages in terms of computing speed and memory usage. Statistical analysis of all test devices showed that the median positioning error of the method of the present application was 2.17 meters and the 90th percentile error was 8.93 meters; in contrast, the vector-based method had a median error of 2.28 meters and a 90th percentile error of 12.16 meters. In addition, the floor positioning accuracy of the application reaches 99.49 percent, higher than the 99.29 percent achieved by the baseline method.
Overall, the application achieves an 18.7% improvement in mean absolute error from 4.64 meters down to 3.77 meters, while also being superior to the baseline approach in computational speed and memory usage. These advantages verify the efficiency and reliability of the present application in complex practical scenarios.
Fig. 7 shows a schematic view of the data set point locations acquired for building level indoor positioning according to a second embodiment of the application. Fig. 8 shows experimental results according to the second embodiment of the application.
In a second specific embodiment, the evaluation was performed in a scene of about 4,000 square meters (as shown in fig. 7), with data acquisition covering five different types of smartphones and a total of about 30,000 collected samples. The application conducts experiments combining the time dimension and the space dimension with the floor pseudo tag, respectively. As the experimental results of fig. 8 show, the time constraint can significantly improve the performance of the model when the training sample size is less than 60% of the total data size, but the improvement tends to plateau as the data size increases. In contrast, the topology constraint continues to enhance model performance at all sample sizes, although the marginal gain gradually decreases as the amount of data increases. Finally, by integrating all components and architectures into the framework of the present application, optimal performance is achieved.
Fig. 9 shows a block diagram of a training apparatus of the graph neural positioning model according to an embodiment of the present application.
As shown in fig. 9, the training apparatus 900 of the graph neural positioning model includes a first acquisition module 910, a pre-training module 920, and a target training module 930.
A first obtaining module 910, configured to obtain a first training set and a second training set, where the first training set includes a plurality of first channel state undirected graphs, and the second training set includes a plurality of second channel state undirected graphs and a position tag of a target object corresponding to each of the second channel state undirected graphs;
The pre-training module 920 is configured to pre-train the initial graph neural network by using a plurality of first channel state undirected graphs to obtain an intermediate graph neural network;
the target training module 930 is configured to train an initial positioning model using the plurality of second channel state undirected graphs and the plurality of position labels to obtain a graph neural positioning model, where the initial positioning model includes a plurality of independent intermediate graph neural networks.
According to the embodiment of the application, pre-training is performed using the first channel state undirected graphs, and the pre-trained intermediate graph neural network is then trained using the second channel state undirected graphs and the position labels, so that a graph neural positioning model usable for positioning the target object is obtained. Because the high flexibility of the graph structure is fully utilized, a general graph neural positioning model oriented to the channel state information of actual commercial scenarios is realized. Meanwhile, model pre-training is carried out on a large number of unlabeled first channel state undirected graphs, which improves the positioning robustness of the graph neural positioning model; an uncertainty learning strategy is introduced to cope with the complex and changeable environments of practical applications, further improving the positioning stability and reliability of the graph neural positioning model.
According to an embodiment of the present application, the first training set further includes a floor pseudo tag of the target object corresponding to each of the first channel state undirected graphs.
According to an embodiment of the application, the pre-training module 920 includes:
The first obtaining unit is used for processing the two first channel state undirected graphs by using an initial graph neural network aiming at any two first channel state undirected graphs to obtain a first output set, wherein the first output set comprises time dimension characteristics and space dimension characteristics corresponding to each first channel state undirected graph, and the space dimension characteristics comprise predicted floors and predicted positions of the first channel state undirected graphs;
the second obtaining unit is used for respectively processing the two space dimension features by using a measurement loss function to obtain a first loss value corresponding to each first channel state undirected graph;
the third obtaining unit is used for respectively processing the two time dimension characteristics by utilizing the contrast loss function to obtain second loss values corresponding to the two first channel state undirected graphs;
A fourth obtaining unit, configured to process, for each first channel state undirected graph, a space dimension feature and a floor pseudo tag by using an average absolute error loss function, to obtain a third loss function;
A first generation unit for generating a pre-training loss value according to the first loss value, the plurality of second loss values and the plurality of third loss functions;
and a fifth obtaining unit, configured to iteratively adjust network parameters of the initial graph neural network according to the pre-training loss value, to obtain a trained intermediate graph neural network.
According to an embodiment of the present application, the target training module 930 includes:
A sixth obtaining unit, configured to process, for each intermediate graph neural network, each second channel state undirected graph by using the intermediate graph neural network, to obtain a second output set, where the second output set includes an abscissa predicted value, an ordinate predicted value, a floor predicted value, an abscissa variance, and an ordinate variance;
The second generation unit is used for generating abscissa probability distribution information and ordinate probability distribution information according to the abscissa predicted value and the ordinate predicted value respectively;
The third generation unit is used for generating probability distribution divergences according to any probability distribution information in the abscissa probability distribution information and the ordinate probability distribution information and the coordinate label function value corresponding to the probability distribution information, wherein the position labels comprise an abscissa label, an ordinate label and a floor label;
a fourth generating unit, configured to generate a floor loss value according to the floor predicted value and the floor label;
A fifth generation unit for generating a combined loss value according to the probability distribution divergence corresponding to the abscissa, the probability distribution divergence corresponding to the ordinate, and the floor loss value;
And a seventh obtaining unit, configured to iteratively adjust network parameters of the intermediate graph neural network according to the combined loss value to obtain a trained target graph neural network, where the graph neural positioning model includes a plurality of target graph neural networks.
According to an embodiment of the present application, any one of the first channel state undirected graph and the second channel state undirected graph is generated by:
The system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring channel state matrixes generated by different wireless devices, wherein the channel state matrixes are generated by the wireless devices in the same time window when the wireless devices interact with the transmitting end of a target object;
The construction unit is used for constructing nodes in the channel state undirected graph according to the attribute in the channel state matrix aiming at each channel state matrix;
And a sixth generating unit, configured to generate an edge between different nodes in the channel state undirected graph based on the amplitude and the reception attribute of the receiving antenna of the wireless device in the channel state matrix, where the reception attribute includes channel impulse response information or phase differences of the different receiving antennas.
Fig. 10 shows a block diagram of a target object positioning device according to an embodiment of the application.
As shown in fig. 10, the positioning apparatus 1000 of the target object includes a second acquisition module 1010, a generation module 1020, a prediction module 1030, and an obtaining module 1040.
A second obtaining module 1010, configured to obtain a target channel state matrix generated by a plurality of wireless devices when a transmitting end of a target object interacts with the plurality of wireless devices;
a generating module 1020, configured to generate, for each target channel state matrix, a target channel state undirected graph according to the target channel state matrix;
the prediction module 1030 is configured to input the target channel state undirected graph to a graph neural positioning model to obtain a plurality of initial position prediction sets;
the obtaining module 1040 is configured to perform weighted average processing on the plurality of initial position prediction sets, so as to obtain a target position of the target object.
According to an embodiment of the present application, the initial position prediction set includes an abscissa prediction mean, an abscissa prediction variance, an ordinate prediction mean, and an ordinate prediction variance.
According to the embodiment of the application, pre-training is performed using the first channel state undirected graphs, and the pre-trained intermediate graph neural network is then trained using the second channel state undirected graphs and the position labels, so that a graph neural positioning model usable for positioning the target object is obtained. Because the high flexibility of the graph structure is fully utilized, a general graph neural positioning model oriented to the channel state information of actual commercial scenarios is realized. Meanwhile, model pre-training is carried out on a large number of unlabeled first channel state undirected graphs, which improves the positioning robustness of the graph neural positioning model; an uncertainty learning strategy is introduced to cope with the complex and changeable environments of practical applications, further improving the positioning stability and reliability of the graph neural positioning model.

Any number of the modules, sub-modules, units, or sub-units, or at least part of the functionality of any number of them, according to embodiments of the application may be implemented in one module. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present application may be split into multiple modules. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the application may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), Programmable Logic Array (PLA), system-on-chip, system-on-substrate, system-on-package, or Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or in any one of, or any suitable combination of, software, hardware, and firmware.
Or one or more of the modules, sub-modules, units, sub-units according to embodiments of the application may be at least partly implemented as computer program modules which, when run, may perform the corresponding functions.
For example, any number of the first acquisition module 910, the pre-training module 920, the target training module 930, or the second acquisition module 1010, the generation module 1020, the prediction module 1030, the obtaining module 1040 may be combined in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Or at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the application, at least one of the first acquisition module 910, the pre-training module 920, the target training module 930, or the second acquisition module 1010, the generation module 1020, the prediction module 1030, the derivation module 1040 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of integrating or packaging the circuits, such as hardware or firmware, or in any one of or a suitable combination of any of the three implementations of software, hardware and firmware. Or at least one of the first acquisition module 910, the pre-training module 920, the target training module 930, or the second acquisition module 1010, the generation module 1020, the prediction module 1030, the derivation module 1040 may be at least partially implemented as a computer program module, which when executed, may perform the corresponding functions.
It should be noted that, in the embodiments of the present application, the training apparatus of the graph neural positioning model corresponds to the training method of the graph neural positioning model, and the positioning apparatus of the target object corresponds to the positioning method of the target object; for details of the apparatuses, reference may be made to the descriptions of the corresponding methods, which are not repeated herein.
Fig. 11 shows a block diagram of an electronic device adapted to implement the method described above, according to an embodiment of the application. The electronic device shown in fig. 11 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 11, an electronic device 1100 according to an embodiment of the present application includes a processor 1101 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. The processor 1101 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 1101 may also include on-board memory for caching purposes. The processor 1101 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flow according to an embodiment of the application.
In the RAM 1103, various programs and data necessary for the operation of the electronic device 1100 are stored. The processor 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. The processor 1101 performs various operations of the method flow according to the embodiment of the present application by executing programs in the ROM 1102 and/or the RAM 1103. Note that the program may be stored in one or more memories other than the ROM 1102 and the RAM 1103. The processor 1101 may also perform various operations of the method flow according to an embodiment of the present application by executing programs stored in the one or more memories.
According to an embodiment of the application, the electronic device 1100 may also include an input/output (I/O) interface 1105, which is also connected to the bus 1104. The electronic device 1100 may further include one or more of the following connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as required. Removable media 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read therefrom is installed into the storage section 1108 as needed.
According to an embodiment of the present application, the method flow according to an embodiment of the present application may be implemented as a computer software program. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1109, and/or installed from the removable media 1111. The above-described functions defined in the system of the embodiment of the present application are performed when the computer program is executed by the processor 1101. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the application.
The present application also provides a computer-readable storage medium that may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present application.
According to an embodiment of the present application, the computer-readable storage medium may be a non-volatile computer-readable storage medium, such as, but not limited to, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the application, the computer-readable storage medium may include ROM 1102 and/or RAM 1103 described above and/or one or more memories other than ROM 1102 and RAM 1103.
Embodiments of the present application also include a computer program product comprising a computer program, the computer program containing program code which, when the computer program product is run on an electronic device, causes the electronic device to carry out the methods provided by the embodiments of the present application.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted or distributed over a network medium in the form of a signal, downloaded and installed via the communication section 1109, and/or installed from the removable media 1111. The computer program may comprise program code transmitted using any appropriate medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
The embodiments of the present application are described above. These embodiments are for illustrative purposes only and are not intended to limit the scope of the present application. Although the embodiments are described separately above, this does not mean that the measures in different embodiments cannot be used advantageously in combination. Those skilled in the art may make various modifications and alterations without departing from the scope of the application, and such modifications and alterations shall fall within the scope of the application.

Claims (9)

CN202510072849.8A (filed 2025-01-17, priority 2025-01-17): Training method of graph neural localization model, method and device for localizing target object. Status: Active. Granted publication: CN119513612B (en).

Priority Applications (1)

Application Number: CN202510072849.8A; Priority Date: 2025-01-17; Filing Date: 2025-01-17; Title: Training method of graph neural localization model, method and device for localizing target object (CN119513612B, en)

Applications Claiming Priority (1)

Application Number: CN202510072849.8A; Priority Date: 2025-01-17; Filing Date: 2025-01-17; Title: Training method of graph neural localization model, method and device for localizing target object (CN119513612B, en)

Publications (2)

CN119513612A (en): published 2025-02-25
CN119513612B (en): granted 2025-05-06

Family

Family ID: 94668526

Family Applications (1)

Application Number: CN202510072849.8A; Priority Date: 2025-01-17; Filing Date: 2025-01-17; Status: Active; Title: Training method of graph neural localization model, method and device for localizing target object (CN119513612B, en)

Country Status (1)

CN (1): CN119513612B (en)

Citations (3)

* Cited by examiner, † Cited by third party
CN111966831A (en)* (priority 2020-08-18, published 2020-11-20, AInnovation (Shanghai) Technology Co., Ltd.): Model training method, text classification device and network model
CN113240071A (en)* (priority 2021-05-13, published 2021-08-10, Ping An Technology (Shenzhen) Co., Ltd.): Graph neural network processing method and device, computer equipment and storage medium
CN117939402A (en)* (priority 2023-12-28, published 2024-04-26, Jinhua Research Institute of Zhejiang University): CSI indoor positioning method and device based on graph neural network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
US11340345B2 (en)* (priority 2015-07-17, granted 2022-05-24, Origin Wireless, Inc.): Method, apparatus, and system for wireless object tracking
CN111865489B (en)* (priority 2020-06-11, granted 2022-10-28, Southeast University): Multiple-input multiple-output detection method based on graph neural network
US12052121B2 (en)* (priority 2020-09-01, granted 2024-07-30, Qualcomm Incorporated): Neural network based line of sight detection and angle estimation for positioning
CN115018073A (en)* (priority 2022-08-09, published 2022-09-06, Zhejiang Lab): A method and system for prediction of spatiotemporal perception information based on graph neural network
KR102735566B1 (en)* (priority 2022-12-21, granted 2024-11-29, Korea Advanced Institute of Science and Technology): Deep-neural-network-based adaptive human recognition method and system exploiting WiFi channel state information in indoor environment
WO2024205463A1 (en)* (priority 2023-03-24, published 2024-10-03, Telefonaktiebolaget LM Ericsson (publ)): Methods and apparatuses for training a graph neural network
CN117041864A (en)* (priority 2023-07-26, published 2023-11-10, Nanjing University of Posts and Telecommunications): Target positioning method and system based on CSI amplitude-phase information composition


Also Published As

CN119513612A (en): published 2025-02-25

Similar Documents

US20160356665A1: Pipeline monitoring systems and methods
Goswami et al.: WiGEM: A learning-based approach for indoor localization
US9288629B2: Mobile device positioning system
CN112218330B: Positioning method and communication device
CN111867049A: Positioning method, device and storage medium
US12402019B2: Method, electronic device and non-transitory computer-readable storage medium for determining indoor radio transmitter distribution
Bai et al.: A new method for improving Wi-Fi-based indoor positioning accuracy
US9867041B2: Methods and systems for determining protected location information based on temporal correlations
US20230362039A1: Neural network-based channel estimation method and communication apparatus
CN105143909A: System, method and computer program for dynamic generation of a radio map
Redondi: Radio map interpolation using graph signal processing
US11576054B2: Creating and using cell clusters
US20090144028A1: Method and apparatus of combining mixed resolution databases and mixed radio frequency propagation techniques
CN108769902B: Target positioning method and device, computer equipment and storage medium
CN118678377A: Spectrum coverage map construction method and device based on heterogeneous graph neural network
CN119316802B: Space division method, apparatus, computer device, storage medium, and program product
CN101729163A: Method and equipment for determining interference source of mobile communication
CN119513612B: Training method of graph neural localization model, method and device for localizing target object
CN118659811A: Method and system for improving signal connection strength based on network communication equipment
CN114449439A: Method and device for positioning underground pipe gallery space
WO2020258509A1: Method and device for isolating abnormal access of terminal device
Wu et al.: Cost-efficient indoor white space exploration through compressive sensing
CN113721191B: Signal source localization method and system for improving matrix completion performance by adaptive rasterization
Liu et al.: Edge big data-enabled low-cost indoor localization based on Bayesian analysis of RSS
Laó Amores: Data augmentation models for improved indoor positioning accuracy using RSS Fingerprinting

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
