CN119199844B - A high-resolution imaging method for through-wall radar targets based on conditional diffusion model - Google Patents

A high-resolution imaging method for through-wall radar targets based on conditional diffusion model

Info

Publication number
CN119199844B
CN119199844B
Authority
CN
China
Prior art keywords
noise
target
image
radar
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411229838.8A
Other languages
Chinese (zh)
Other versions
CN119199844A (en)
Inventor
曾小路
杨小鹏
陈子涵
廖健成
龚俊波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
2024-09-03 Priority to CN202411229838.8A
2024-12-27 Publication of CN119199844A
2025-09-26 Application granted; publication of CN119199844B
Legal status: Active

Abstract

The invention provides a through-wall radar target high-resolution imaging method based on a conditional diffusion model. A Markov process of forward diffusion and backward sampling is established, a noise-reduction network is designed, and radar image information is used to condition the iterative generation of a high-resolution optical image. The method breaks through the resolution limits of traditional through-wall radar algorithms, effectively recovers the shape and contour information of the target, enhances the identifiability of the result, and facilitates subsequent use of the imaging result. Compared with other imaging methods, it performs high-resolution imaging of targets in occluded spaces and scenes, recovers the shape and contour information of the target, improves on the resolution of traditional through-wall radar imaging methods, and provides an intuitive, identifiable imaging result.

Description

Through-wall radar target high-resolution imaging method based on a conditional diffusion model
Technical Field
The invention belongs to the technical field of radar signal processing and through-wall radar imaging, and particularly relates to a through-wall radar target high-resolution imaging method based on a conditional diffusion model.
Background
With the acceleration of urbanization around the world, building layouts and structures have become increasingly complex and building density has increased, so that parts of urban interior space are heavily occluded; targets can easily hide there, posing serious threats and major safety hazards. At present, various practical and effective target detection methods exist, including infrared sensing, X-rays, and ultrasonic detection, but each has drawbacks in practical scenes: X-rays have extremely strong penetration but pose a serious radiation hazard to the human body; ultrasonic detection offers high precision and sensitivity but is easily affected by environmental temperature noise; and infrared sensing has strong anti-interference capability but almost no penetration. Therefore, research on efficient methods for detecting targets through walls is of great significance.
Through-wall radar imaging is a non-contact, non-destructive electromagnetic sensing technology with many advantages, such as strong penetration, strong anti-interference capability, and a wide detection range. The penetration of the transmitted signal is exploited to receive the target echo behind the obstacle, which is then analyzed, displayed, or imaged to obtain target information in the occluded area and to achieve target localization and tracking. It has broad application prospects in disaster relief, military reconnaissance, medical detection, and similar fields.
At present, many high-resolution algorithms have been proposed in the field of through-wall radar imaging. However, for complex extended targets in actual scenes, performance limitations such as radar aperture and bandwidth make improvements in target imaging resolution marginal; imaging results are mostly blobs that lose the target contour and are unrecognizable, which greatly reduces their usability. High-resolution through-wall radar imaging therefore remains a challenge in the field.
Disclosure of Invention
In order to solve the problems of low through-wall radar imaging resolution and poor identifiability caused by the loss of target contour information, the invention provides a through-wall radar target high-resolution imaging method based on a conditional diffusion model. A Markov process of forward diffusion and backward sampling is established, a noise-reduction network is designed, and radar image information is used to condition the iterative generation of a high-resolution optical image, so that the resolution limits of traditional through-wall radar algorithms are broken through, the shape and contour information of the target is effectively recovered, the identifiability of the result is enhanced, and subsequent use of the imaging result is facilitated.
A through-wall radar target high-resolution imaging method based on a conditional diffusion model comprises the following steps:
S1, acquiring a three-dimensional radar image I_radar and a target depth image I_optical of a target to be detected located behind a wall;
S2, adding noise to the target depth image I_optical moment by moment until it is converted into a target noise image x_T conforming to a random Gaussian distribution, where T is the total number of noising moments required to convert the target depth image I_optical into the target noise image;
S3, inputting the three-dimensional radar image I_radar, the moment T, and the target noise image x_T into a trained noise prediction network to obtain the predicted noise;
S4, reversely denoising the target noise image x_T using the predicted noise to obtain the target noise image required for reverse denoising at the next moment;
S5, decrementing the current moment by 1 to obtain the next moment, and then inputting the three-dimensional radar image I_radar, the next moment, and the most recently obtained target noise image into the trained noise prediction network to obtain the predicted noise required for the next reverse denoising;
S6, re-executing steps S4–S5 with the most recently obtained predicted noise and target noise image until the moment index decreases to 0, and taking the target noise image obtained when the iteration moment is 0 as the final imaging of the target to be detected.
Further, in step S2, the target noise image x_T conforming to a random Gaussian distribution is obtained as
x_T = √(ᾱ_T)·x_0 + √(1-ᾱ_T)·ε_T
where x_0 is the noise-free target depth image I_optical, ε_T is the random noise sampled from the standard Gaussian distribution at the T-th noising moment, ᾱ_T = ∏_{k=1}^{T} α_k is the cumulative variance variable at the T-th noising moment, and α_k = 1-β_k, k = 1, ..., T, with β_k the noise variance of the random noise ε_k sampled at the k-th noising moment;
Meanwhile, the target intermediate noise image x_t at any noising moment t is obtained as
x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t
where ε_t is the random noise sampled from the standard Gaussian distribution at the t-th noising moment, t = 1, ..., T-1, ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment, and α_i = 1-β_i, i = 1, ..., t, with β_i the noise variance of the random noise ε_i sampled at the i-th noising moment.
Further, in step S4, the target noise image required for reverse denoising at the next moment is obtained as follows:
the mean μ_θ(x_t, t, c) of the target noise image required for reverse denoising at the next moment is computed as
μ_θ(x_t, t, c) = (1/√(α_t)) · ( x_t - ((1-α_t)/√(1-ᾱ_t)) · f_θ(x_t, t, c) )
where t is the current reverse-denoising moment, f_θ(x_t, t, c) is the predicted noise most recently obtained at the current reverse-denoising moment t, c denotes the three-dimensional radar image I_radar, θ is the network parameter set of the prediction network, and x_t is the target noise image most recently obtained at the current reverse-denoising moment t; ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment corresponding to the t-th reverse-denoising moment, and α_i = 1-β_i, i = 1, ..., t, with β_i the noise variance of the random noise ε_i sampled at the i-th noising moment;
an image whose mean satisfies μ_θ(x_t, t, c) and whose variance satisfies (1-α_t)·I is taken as the target noise image x_{t-1} required for reverse denoising at the next moment t-1.
Further, in step S3, the training method of the noise prediction network is as follows:
S31, collecting three-dimensional radar images and target depth images of different targets located behind a wall;
S32, acquiring the target noise image of each target from its target depth image;
S33, inputting the three-dimensional radar image of each target, the total number of noising moments T required to convert its target depth image into a target noise image, and its target noise image into the noise prediction network to be trained, to obtain the predicted noise;
S34, constructing a loss function L from the predicted noise as
L = E_{x,c} E_{ε_t} ‖ f_θ( √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t, t, c ) - ε_t ‖_p
where the reverse-denoising moment t = 1, ..., T; f_θ(·) is the predicted noise obtained at the reverse-denoising moment t; c denotes the three-dimensional radar image I_radar; θ is the network parameter set of the prediction network; ε_t is the random noise sampled from the standard Gaussian distribution at the t-th noising moment during the noising process that converts the target depth image into the target noise image; x_0 is the target depth image without added noise; x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t is the target noise image at any reverse-denoising moment t; ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment, and α_i = 1-β_i, i = 1, ..., t; ‖·‖_p denotes the p-norm; E_{ε_t} denotes the expectation taken over the random noise ε_t; E_{x,c} denotes the expectation taken over the target depth image x and the condition c;
S35, judging whether the loss function L is smaller than a set threshold value, if so, obtaining a final noise prediction network, otherwise, entering step S36;
S36, changing the network parameter values in the network parameter set θ of the noise prediction network, and re-executing steps S33–S35 with the updated noise prediction network.
Further, in step S1, the method for acquiring the three-dimensional radar image Iradar of the target to be detected includes:
assuming that a wall of thickness d and dielectric constant ε lies along the x-axis direction, and that, apart from the wall, the regions on the positive and negative y-axis sides, occupied respectively by the antennas and the target to be detected, are free space;
The radar operates in synthetic aperture mode, and N transceiver antenna units spaced d1 apart transmit a stepped-frequency continuous-wave signal containing M frequency points; if there are Q targets to be detected on the other side of the wall, the frequency-domain echo S(m, n) received by the n-th transceiver antenna unit at the m-th frequency point is
S(m, n) = σ_wall·exp(-j2π·f_m·τ_wall) + Σ_{q=1}^{Q} σ_q·exp(-j2π·f_m·τ_{q,n}) + S_noise(m, n)
where σ_wall and σ_q are the scattering coefficients of the wall and of the q-th target to be detected, q = 1, 2, ..., Q; S_noise(m, n) is the noise signal of the n-th transceiver antenna unit at the m-th frequency point; f_m is the frequency of the m-th frequency point; τ_wall is the two-way delay between the transceiver antenna units and the wall; and τ_{q,n} is the two-way delay between the n-th antenna and the q-th target to be detected;
Performing an inverse Fourier transform on the frequency-domain echo S(m, n) received by each of the N transceiver antenna units to obtain the time-domain echo of each unit;
performing BP imaging on the time-domain echo of each transceiver antenna unit to obtain the BP image I_n of each unit over the target area;
performing coherent superposition of the BP images I_n of the transceiver antenna units over the target area to obtain the superimposed image I_BP = Σ_{n=1}^{N} I_n;
and weighting the superimposed image with the phase coherence factor PCF to obtain the three-dimensional radar image I_radar = PCF·I_BP of the target to be detected.
Further, the method for acquiring the phase coherence factor PCF comprises the following steps:
PCF = 1 - std(e^{-jφ})
where std(·) denotes the standard deviation computed over the transceiver antenna units, j is the imaginary unit, φ_l is the phase of the echo signal reflected from the target to the l-th transceiver antenna unit, and l = 0, 1, ..., N-1 is the index of the transceiver antenna unit.
Further, in step S2, the method for acquiring the target depth image Ioptical of the target to be measured includes:
According to the perspective principle of light, an optical depth camera is arranged at the center of the radar's transceiver antenna array to obtain a target depth image I_optical that shares the radar's viewing angle and is registered with the three-dimensional radar image I_radar.
The beneficial effects are that:
The invention provides a through-wall radar target high-resolution imaging method based on a conditional diffusion model. A Markov process of forward diffusion and backward sampling is established, a noise-reduction network is designed, and radar image information is used to condition the iterative generation of a high-resolution optical image. The method breaks through the resolution limits of traditional through-wall radar algorithms, effectively recovers the shape and contour information of the target, enhances the identifiability of the result, and facilitates subsequent use of the imaging result. Compared with other imaging methods, it performs high-resolution imaging of targets in occluded spaces and scenes, recovers the shape and contour information of the target, improves on the resolution of traditional through-wall radar imaging methods, and provides an intuitive, identifiable imaging result.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a signal scenario in the present invention;
FIG. 3 is a schematic diagram of phase coherence factor weighting in the present invention;
FIG. 4 shows the forward diffusion and backward sampling processes of the conditional denoising diffusion probabilistic model employed in the present invention;
FIG. 5 is a hierarchical structure of a noise reduction network designed in accordance with the present invention;
FIG. 6 is an iterative high resolution imaging procedure of the present invention;
FIG. 7 is a schematic diagram of a simulation scenario of the present invention;
Fig. 8 is a schematic diagram of an experimental scenario of the present invention, where (a) is a MIMO radar used, (b) is a schematic diagram of a radar array element position, (c) is a non-through-wall scenario, (d) is a schematic diagram of a through-wall scenario, (e) is a schematic diagram of a radar wall-attaching mode in the through-wall scenario, and (f) is a schematic diagram of a post-wall target placement;
FIG. 9 is a simulation result graph of the present invention, wherein (a) is the simulated target, (b) is the three-dimensional radar image, (c) shows front-view and top-view section views of the radar image, (d) is the high-resolution imaging result of the method of the present invention, and (e) is the target ground-truth label;
FIG. 10 is a measured-result graph of the present invention, wherein (a) is the measured target image, (b) is the three-dimensional radar image, (c) shows front-view and top-view section views of the radar image, (d) is the high-resolution image of the proposed method, and (e) is the target ground-truth label.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
In order to overcome the insufficient resolution and weak identifiability of conventional through-wall radar imaging algorithms, the invention provides a through-wall radar target high-resolution imaging method based on a conditional diffusion model (Conditional Denoising Diffusion Probabilistic Model, CDDPM). The basic idea is to fit the distribution of the target data domain under a specified condition through forward noising and reverse denoising processes based on a Markov chain.
Specifically, as shown in FIG. 1, the through-wall radar target high-resolution imaging method of the invention comprises the following steps:
S1, acquiring a three-dimensional radar image I_radar and a target depth image I_optical of a target to be detected located behind a wall;
The three-dimensional radar image I_radar of the target to be detected is acquired as follows:
S11, as shown in FIG. 2, a wall of thickness d and dielectric constant ε is assumed to lie along the x-axis direction; apart from the wall, the regions on the positive and negative y-axis sides, occupied respectively by the antennas and the target to be detected, are free space;
S12, the radar operates in synthetic aperture mode, and N transceiver antenna units spaced d1 apart transmit a stepped-frequency continuous-wave (SFCW) signal containing M frequency points; if there are Q targets to be detected on the other side of the wall, the frequency-domain echo S(m, n) received by the n-th transceiver antenna unit at the m-th frequency point is
S(m, n) = σ_wall·exp(-j2π·f_m·τ_wall) + Σ_{q=1}^{Q} σ_q·exp(-j2π·f_m·τ_{q,n}) + S_noise(m, n)
where σ_wall and σ_q are the scattering coefficients of the wall and of the q-th target to be detected, q = 1, 2, ..., Q; S_noise(m, n) is the noise signal of the n-th transceiver antenna unit at the m-th frequency point; f_m is the frequency of the m-th frequency point; τ_wall is the two-way delay between the transceiver antenna units and the wall; and τ_{q,n} is the two-way delay between the n-th antenna and the q-th target to be detected;
S13, performing an inverse Fourier transform on the frequency-domain echo S(m, n) received by each of the N transceiver antenna units to obtain the time-domain echo of each unit;
S14, performing BP imaging on the time-domain echo of each transceiver antenna unit to obtain the BP image I_n of each unit over the target area;
S15, performing coherent superposition of the BP images I_n of the transceiver antenna units over the target area to obtain the superimposed image I_BP = Σ_{n=1}^{N} I_n;
S16, in order to improve image quality and reduce grating-lobe energy, as shown in FIG. 3, weighting the superimposed image with the phase coherence factor PCF to obtain the three-dimensional radar image I_radar = PCF·I_BP of the target to be detected.
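As an illustration of the echo model in step S12, the frequency-domain echoes of a stepped-frequency signal can be simulated for ideal point scatterers as in the following sketch. The 1–3.5 GHz band and the 10 MHz frequency step follow the simulation parameters given later; the (simplified, one-dimensional) antenna line, target positions, scattering coefficients, and noise level are assumed values chosen for the example, the direct wall return is omitted, and wall refraction is neglected.

```python
import numpy as np

c0 = 3e8                                              # free-space propagation speed (m/s)
freqs = np.arange(1.0e9, 3.5e9 + 1e6, 10e6)           # stepped frequency points f_m (1-3.5 GHz, 10 MHz step)
ants = np.stack([np.linspace(-0.5, 0.5, 11),          # assumed transceiver positions (x, y, z)
                 np.zeros(11), np.zeros(11)], axis=1)
targets = np.array([[0.3, 2.0, 0.0]])                 # assumed point targets behind the wall
sigma = np.array([1.0])                               # assumed scattering coefficients sigma_q

M, N, Q = len(freqs), len(ants), len(targets)
S = np.zeros((M, N), dtype=complex)
for n in range(N):
    for q in range(Q):
        tau = 2.0 * np.linalg.norm(targets[q] - ants[n]) / c0      # two-way delay tau_{q,n}
        S[:, n] += sigma[q] * np.exp(-1j * 2 * np.pi * freqs * tau)
S += 0.01 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))   # additive noise S_noise(m, n)
```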
The calculation method of the phase coherence factor PCF comprises the following steps:
PCF = 1 - std(e^{-jφ})
where std(·) denotes the standard deviation computed over the transceiver antenna units, j is the imaginary unit, φ_l is the phase of the echo signal reflected from the target to the l-th transceiver antenna unit, and l = 0, 1, ..., N-1 is the index of the transceiver antenna unit.
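Continuing the sketch above, steps S13–S16 (per-channel back-projection, coherent superposition, and PCF weighting) might be implemented as follows. The imaging grid is an assumed input, wall refraction compensation is again omitted for brevity, and back-projection is done here directly in the frequency domain rather than via an explicit inverse Fourier transform.

```python
def bp_pcf_image(S, freqs, ants, grid):
    """Per-channel back-projection, coherent superposition, and PCF weighting."""
    c0 = 3e8
    _, N = S.shape
    per_channel = np.zeros((N, len(grid)), dtype=complex)
    for n in range(N):
        tau = 2.0 * np.linalg.norm(grid - ants[n], axis=1) / c0        # two-way delays to grid points
        steering = np.exp(1j * 2 * np.pi * freqs[:, None] * tau[None, :])
        per_channel[n] = (S[:, n:n + 1] * steering).sum(axis=0)        # BP image I_n of channel n
    I_BP = per_channel.sum(axis=0)                                     # coherent superposition
    pcf = 1.0 - np.std(np.exp(-1j * np.angle(per_channel)), axis=0)    # PCF = 1 - std(e^{-j*phi})
    return pcf * np.abs(I_BP)                                          # I_radar = PCF * I_BP

# usage sketch: image a small grid on the y = 2 m plane (assumed extent)
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 41), [2.0], np.linspace(-1, 1, 41)),
                axis=-1).reshape(-1, 3)
I_radar = bp_pcf_image(S, freqs, ants, grid)
```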
Further, the method for acquiring the target depth image Ioptical of the target to be detected comprises the following steps:
According to the perspective principle of light, an optical depth camera is arranged at the center of the radar's transceiver antenna array to obtain a target depth image I_optical that shares the radar's viewing angle and is registered with the three-dimensional radar image I_radar.
S2, adding noise to the target depth image I_optical moment by moment until it is converted into a target noise image x_T conforming to a random Gaussian distribution, where T is the total number of noising moments required to convert the target depth image I_optical into the target noise image;
It should be noted that the target noise image x_T conforming to a random Gaussian distribution is obtained as
x_T = √(ᾱ_T)·x_0 + √(1-ᾱ_T)·ε_T
where x_0 is the noise-free target depth image I_optical, ε_T is the random noise sampled from the standard Gaussian distribution at the T-th noising moment, ᾱ_T = ∏_{k=1}^{T} α_k is the cumulative variance variable at the T-th noising moment, and α_k = 1-β_k, k = 1, ..., T, with β_k the noise variance of the random noise ε_k sampled at the k-th noising moment;
Based on the above, the target intermediate noise image x_t at any noising moment t can further be obtained as
x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t
where ε_t is the random noise sampled from the standard Gaussian distribution at the t-th noising moment, t = 1, ..., T-1, ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment, and α_i = 1-β_i, i = 1, ..., t, with β_i the noise variance of the random noise ε_i sampled at the i-th noising moment.
It should be noted that the essence of step S2 is to establish a forward noising process. This noising process is the diffusion process of the conditional denoising diffusion probabilistic model: the optical data is noised step by step according to a Markov chain, whose diffusion operator q(·) is expressed as
q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t-1})
where x_0 denotes the original high-resolution target depth image I_optical. A noise variance table β_1, ..., β_T with values between 0 and 1 and monotonically increasing is preset; for convenience of subsequent presentation, α_t = 1-β_t and ᾱ_t = ∏_{i=1}^{t} α_i are defined, giving the single-step diffusion operator
q(x_t | x_{t-1}) = N(x_t; √(1-β_t)·x_{t-1}, β_t·I)
With ε_t, t = 1, ..., T, denoting the random noise sampled from the standard Gaussian distribution at each step (one independent noise sample is drawn at each noising moment), the noisy image at the t-th noising moment can be expressed as
x_t = √(α_t)·x_{t-1} + √(1-α_t)·ε_t
Applying this recursion, the relation between the image at any moment and the image at the initial moment is
x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε, with ε ~ N(0, I)
Through this forward process, the high-resolution target depth image I_optical is gradually submerged by noise and finally converted into a sample from a random Gaussian distribution.
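A minimal PyTorch-style sketch of this forward noising process is given below. The schedule length T and the linear β values are assumptions chosen for illustration and are not specified in the text.

```python
import torch

T = 1000                                        # total number of noising moments (assumed)
beta = torch.linspace(1e-4, 0.02, T)            # monotonically increasing variance table (assumed values)
alpha = 1.0 - beta                              # alpha_t = 1 - beta_t
alpha_bar = torch.cumprod(alpha, dim=0)         # cumulative product, a-bar_t

def q_sample(x0, t, eps=None):
    """Draw x_t directly from x_0 via x_t = sqrt(a-bar_t)*x_0 + sqrt(1 - a-bar_t)*eps."""
    if eps is None:
        eps = torch.randn_like(x0)              # epsilon_t ~ N(0, I)
    ab_t = alpha_bar[t - 1]                     # t runs from 1 to T
    return torch.sqrt(ab_t) * x0 + torch.sqrt(1.0 - ab_t) * eps
```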
S3, inputting the three-dimensional radar image I_radar, the moment T, and the target noise image x_T into the trained noise prediction network to obtain the predicted noise;
S4, reversely denoising the target noise image x_T using the predicted noise to obtain the target noise image required for reverse denoising at the next moment;
Further, the target noise image required for reverse denoising at the next moment is obtained as follows:
the mean μ_θ(x_t, t, c) of the target noise image required for reverse denoising at the next moment is computed as
μ_θ(x_t, t, c) = (1/√(α_t)) · ( x_t - ((1-α_t)/√(1-ᾱ_t)) · f_θ(x_t, t, c) )
where t is the current reverse-denoising moment, f_θ(x_t, t, c) is the predicted noise most recently obtained at the current reverse-denoising moment t, c denotes the three-dimensional radar image I_radar, θ is the network parameter set of the prediction network, and x_t is the target noise image most recently obtained at the current reverse-denoising moment t; ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment corresponding to the t-th reverse-denoising moment, and α_i = 1-β_i, i = 1, ..., t, with β_i the noise variance of the random noise ε_i sampled at the i-th noising moment;
an image whose mean satisfies μ_θ(x_t, t, c) and whose variance satisfies (1-α_t)·I is taken as the target noise image x_{t-1} required for reverse denoising at the next moment t-1.
S5, decrementing the current moment by 1 to obtain the next moment, and then inputting the three-dimensional radar image I_radar, the next moment, and the most recently obtained target noise image into the trained noise prediction network to obtain the predicted noise required for the next reverse denoising;
S6, re-executing steps S4–S5 with the most recently obtained predicted noise and target noise image until the moment index decreases to 0, and taking the target noise image obtained when the iteration moment is 0 as the final imaging of the target to be detected.
For example, to obtain the target noise image x_{T-1} at the moment following T, the three-dimensional radar image I_radar, the moment T, and the target noise image x_T are input into the trained noise prediction network to obtain the predicted noise f_θ(x_T, T, c); the target noise image x_T is then reversely denoised with f_θ(x_T, T, c) to obtain the target noise image x_{T-1} required for reverse denoising at the next moment;
The moment T is then decremented by 1, and the three-dimensional radar image I_radar, the moment T-1, and the target noise image x_{T-1} are input into the trained noise prediction network to obtain the predicted noise f_θ(x_{T-1}, T-1, c) required for the next reverse denoising; the target noise image x_{T-1} is reversely denoised with f_θ(x_{T-1}, T-1, c) to obtain the target noise image x_{T-2} required for the next reverse denoising;
and so on, until the moment index reaches 0; the image obtained by the last reverse denoising is the final imaging of the target to be detected.
It can be seen that steps S3–S6 in fact establish a backward denoising process, which is the sampling process of the conditional denoising diffusion probabilistic model. As shown in FIG. 4, the backward process is again modeled as a Markov chain, its purpose being to restore the high-resolution image under the condition c (the three-dimensional radar image I_radar). Unlike the forward process, the backward process must predict the noise added in the forward process, so a neural network f_θ is defined to predict the noise of each step, with θ denoting the network parameters; the backward process operator is then defined as
p_θ(x_{0:T} | c) = p(x_T)·∏_{t=1}^{T} p_θ(x_{t-1} | x_t, c)
where p(x_T) is a sample from the standard Gaussian distribution. Similarly to the forward process, the reverse process operator can be expressed with a mean and a variance as
p_θ(x_{t-1} | x_t, c) = N(x_{t-1}; μ_θ(x_t, t, c), σ_t²·I)
The noise is predicted by the noise prediction network f_θ and the mean of the reconstructed image is computed, i.e., one iteration of the reverse process is performed; the mean reconstructed from the predicted noise f_θ(x_t, t, c) is
μ_θ(x_t, t, c) = (1/√(α_t)) · ( x_t - ((1-α_t)/√(1-ᾱ_t)) · f_θ(x_t, t, c) )
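One reverse iteration built from this reconstructed mean can be sketched as follows, reusing the β/α/ᾱ schedule defined in the forward-process sketch above; the call signature of the noise prediction network is an assumption of the example.

```python
def reverse_step(noise_net, x_t, t, radar_img):
    """Predict the noise, rebuild the mean mu_theta, and sample x_{t-1}."""
    eps_hat = noise_net(x_t, torch.tensor([t]), radar_img)            # f_theta(x_t, t, c)
    a_t, ab_t = alpha[t - 1], alpha_bar[t - 1]
    mean = (x_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps_hat) / torch.sqrt(a_t)
    if t == 1:
        return mean                                                    # last step: return the mean
    return mean + torch.sqrt(1.0 - a_t) * torch.randn_like(x_t)        # variance (1 - alpha_t)*I
```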
the training method of the noise prediction network adopted by the invention is described in detail below, and specifically comprises the following steps:
S31, collecting three-dimensional radar images and target depth images of different targets located behind a wall;
S32, acquiring the target noise image of each target from its target depth image;
S33, inputting the three-dimensional radar image of each target, the total number of noising moments T required to convert its target depth image into a target noise image, and its target noise image into the noise prediction network to be trained, to obtain the predicted noise;
S34, constructing a loss function L from the predicted noise as
L = E_{x,c} E_{ε_t} ‖ f_θ( √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t, t, c ) - ε_t ‖_p
where the reverse-denoising moment t = 1, ..., T; f_θ(·) is the predicted noise obtained at the reverse-denoising moment t; c denotes the three-dimensional radar image I_radar; θ is the network parameter set of the prediction network; ε_t is the random noise sampled from the standard Gaussian distribution at the t-th noising moment during the noising process that converts the target depth image into the target noise image; x_0 is the target depth image without added noise; x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε_t is the target noise image at any reverse-denoising moment t; ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative variance variable at the t-th noising moment, and α_i = 1-β_i, i = 1, ..., t; ‖·‖_p denotes the p-norm; E_{ε_t} denotes the expectation taken over the random noise ε_t; E_{x,c} denotes the expectation taken over the target depth image x and the condition c;
S35, judging whether the loss function L is smaller than a set threshold value, if so, obtaining a final noise prediction network, otherwise, entering step S36;
S36, changing the network parameter values in the network parameter set θ of the noise prediction network, and re-executing steps S33–S35 with the updated noise prediction network.
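Steps S33–S34 for one training batch might look like the following sketch, which reuses q_sample and the schedule defined above; drawing a single random moment per batch and the use of a torch optimizer are assumptions of the example, not details taken from the text.

```python
def training_loss(noise_net, x0, radar_img, p=2):
    """Noise-prediction loss for one batch: || f_theta(x_t, t, c) - eps_t ||_p."""
    t = int(torch.randint(1, T + 1, (1,)))       # random noising moment t in {1, ..., T}
    eps = torch.randn_like(x0)                   # epsilon_t
    x_t = q_sample(x0, t, eps)                   # sqrt(a-bar_t)*x0 + sqrt(1 - a-bar_t)*eps
    eps_hat = noise_net(x_t, torch.tensor([t]), radar_img)
    return torch.mean(torch.abs(eps_hat - eps) ** p)

# one optimisation step (optimizer assumed, e.g. torch.optim.Adam over noise_net.parameters()):
# loss = training_loss(noise_net, depth_batch, radar_batch)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```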
It should be noted that, as shown in FIG. 5, the noise prediction network of the present invention is a fully convolutional noise prediction network based on a self-attention mechanism. Specifically, multi-layer two-dimensional convolution modules extract features from the radar data and the noisy image, and the features are then concatenated; multi-layer residual modules, down-sampling modules, and up-sampling modules perform feature dimensionality reduction and recovery; a self-attention module is added in the low-dimensional part to increase the processing weight of the informative part of the data; and the network encodes the random time sample and injects the encoding into the residual modules for fusion, finally producing a predicted-noise output of the same size as the optical image.
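A much-simplified sketch of such a network is given below. The layer counts, channel widths, and time-encoding scheme are assumptions chosen to keep the example short, and the radar condition is treated here as a single-channel image of the same spatial size as the noisy depth image; it only illustrates the ingredients named above, not the exact architecture of FIG. 5.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block that also fuses the encoded time step as a channel-wise bias."""
    def __init__(self, ch, t_dim):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.t_proj = nn.Linear(t_dim, ch)
        self.act = nn.SiLU()

    def forward(self, x, t_emb):
        h = self.act(self.conv1(x)) + self.t_proj(t_emb)[:, :, None, None]
        return x + self.conv2(self.act(h))

class NoisePredictionNet(nn.Module):
    """Simplified conditional noise-prediction network (illustrative layout only)."""
    def __init__(self, ch=64, t_dim=128):
        super().__init__()
        self.t_embed = nn.Sequential(nn.Linear(1, t_dim), nn.SiLU(), nn.Linear(t_dim, t_dim))
        self.enc_noise = nn.Conv2d(1, ch // 2, 3, padding=1)   # feature extraction: noisy image
        self.enc_radar = nn.Conv2d(1, ch // 2, 3, padding=1)   # feature extraction: radar condition
        self.down = nn.Conv2d(ch, ch, 4, stride=2, padding=1)  # down-sampling
        self.res = ResBlock(ch, t_dim)
        self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)
        self.up = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)  # up-sampling
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x_t, t, radar):
        t_emb = self.t_embed(t.float().view(-1, 1))                    # encoded time step
        h = torch.cat([self.enc_noise(x_t), self.enc_radar(radar)], dim=1)  # feature concatenation
        h = self.res(self.down(h), t_emb)
        b, c, hh, ww = h.shape
        seq = h.flatten(2).transpose(1, 2)                             # self-attention at low resolution
        h = h + self.attn(seq, seq, seq)[0].transpose(1, 2).view(b, c, hh, ww)
        return self.out(self.up(h))                                    # predicted noise, same size as input
```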
As shown in FIG. 6, when high-resolution imaging is realized by iterative denoising, a white Gaussian noise image is first sampled from a zero-mean unit-variance normal distribution; this image, the radar image serving as the condition, and the time encoding are input into the noise prediction network to obtain the predicted noise at the current moment; the image at the previous moment is computed by parameterized reconstruction; and the preceding steps are repeated, so that high-resolution imaging is achieved iteratively.
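Tying the preceding sketches together, the iterative imaging loop might be written as follows; the image size and the network instance are assumptions of the example.

```python
@torch.no_grad()
def iterative_imaging(noise_net, radar_img, shape):
    """Start from white Gaussian noise and denoise iteratively under the radar condition."""
    x_t = torch.randn(shape)                 # sample from a zero-mean unit-variance Gaussian
    for t in range(T, 0, -1):                # t = T, T-1, ..., 1
        x_t = reverse_step(noise_net, x_t, t, radar_img)
    return x_t                               # final high-resolution imaging result

# usage sketch (all sizes assumed):
# net = NoisePredictionNet()
# result = iterative_imaging(net, radar_img=torch.zeros(1, 1, 64, 64), shape=(1, 1, 64, 64))
```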
The through-wall radar target high-resolution imaging method based on a conditional diffusion model provided by the invention is described in detail below, taking the application scenario of FIG. 7 as an example.
Firstly, the simulation parameters are set as shown in Table 1: the radar operates in synthetic aperture mode, the transmitted stepped-frequency signal ranges from 1 GHz to 3.5 GHz with one frequency point every 10 MHz, the antenna scan aperture is 1 m × 1 m, and the antenna step is 10 cm.
Table 1 simulation parameter settings
A data set comprising tables, chairs, people, RPGs is constructed in a simulation scenario.
The invention also presents the experimental scene shown in FIG. 8, with the experimental parameters set as shown in Table 2: a 10-transmit, 10-receive MIMO array transmits stepped-frequency signals from 1.7 GHz to 2.2 GHz; the array size is 40 cm × 40 cm; the concrete brick wall is 20 cm thick; and the target located 2 m behind the wall is illuminated with the radar placed against the wall.
Table 2 experimental parameter settings
A data set containing tables, chairs, and people is constructed in the experimental scene and combined with the simulation data set. With the diffusion and sampling logic established, the data set is divided into a training set and a test set; the training set is used to train the designed network, which predicts the noise of each step in the sampling process, the image at the previous moment is rebuilt through iterative parameterized reconstruction, and reconstruction of the high-resolution image is finally realized.
Part of the simulation and experimental results are selected for analysis. The simulation results are shown in FIG. 9: the target is a wooden table; (b) shows the three-dimensional radar data obtained by BP imaging; (c) shows its front-view and top-view sections, in which the contour and type of the target can hardly be distinguished because of the low resolution of the radar data; (d) and (e) show the high-resolution imaging result of the method and the corresponding ground-truth image, respectively, demonstrating that the true contour and texture information of the target is restored to a great extent and high-resolution imaging of the target is realized. The measured results are shown in FIG. 10: the target is an iron chair placed behind a 20 cm thick wall; (b) and (c) show the three-dimensional radar image and its section views obtained by BP imaging; because the measurement is affected by various complex factors, such as clutter reflected by environmental multipath, the imaging accuracy of the target drops sharply; after processing by the proposed method, as shown in (d), the contour information of the target is recovered and high-resolution imaging is realized.
In summary, compared with other imaging methods, the method can improve the accuracy of the imaging result of the through-wall radar, realize the recovery of the contour and texture information of the target, and improve the identifiability of the target.
Of course, the present invention is capable of other various embodiments and its several details are capable of modification and variation in light of the present invention by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

