Disclosure of Invention
Aiming at the problem of poor navigation capability of fire extinguishing robots in complex environments in the prior art, the present application provides a front-end fire extinguishing robot control method and system. A distributed fire detection network is built, the accurate position of the fire is obtained through Bayesian inference, mapping, positioning and path planning are carried out on the target area according to the obtained position, and closed-loop detection is performed on the planned path, so that the path navigation capability of the fire extinguishing robot in a complex environment is improved.
The aim of the application is achieved by the following technical scheme.
The application provides a front-end fire extinguishing robot control method, which comprises the steps of: arranging a plurality of smoke sensors and infrared cameras in a target area and connecting them with a fire extinguishing robot by wireless communication to form a distributed fire detection network; sending positioning information of the fire area to the fire extinguishing robot when the distributed fire detection network detects a fire through a flame recognition algorithm; and, after the fire extinguishing robot receives the positioning information of the fire area, collecting environment information in the target area with the carried laser radar sensor, building a three-dimensional point cloud model of the buildings in the area through a synchronous positioning and mapping (SLAM) algorithm, and performing self-positioning and navigation path planning with the constructed three-dimensional point cloud model.
The fire extinguishing robot navigates to the fire area along the planned navigation path to extinguish the fire. The fire extinguishing robot senses the environment along the planned navigation path with the laser radar and a vision sensor, and automatically navigates to the fire area through the local path planning algorithm DWA (dynamic window approach) according to the local environment information and the target point position. After reaching the fire area, the visible light camera and the infrared thermal imager on the robot are controlled to collect fire scene images; the collected visible light images and infrared images are registered and fused; a deep learning target detection algorithm YOLOv is used to detect the flame region; the detected flame region is located in the infrared image; the flame pixel coordinates are converted into three-dimensional space coordinates according to the intrinsic and extrinsic parameters of the infrared imager to obtain the fire source position information; and the fire extinguishing device is controlled to aim at the fire source according to the fire source position information to extinguish the fire.
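The final coordinate conversion can be sketched with the pinhole camera model; the focal lengths, principal point, depth value and identity extrinsics below are illustrative assumptions, not parameters of the application:

```python
def pixel_to_camera_3d(u, v, depth, fx, fy, cx, cy):
    # Pinhole back-projection: a pixel (u, v) with known depth becomes a
    # camera-frame point (x, y, z). fx, fy, cx, cy are the intrinsics.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def camera_to_world(p_cam, R, t):
    # Apply the extrinsics: world point = R * p_cam + t (R as a 3x3 nested list).
    return tuple(sum(R[r][c] * p_cam[c] for c in range(3)) + t[r] for r in range(3))

# Example: flame centroid at pixel (400, 250), 6 m away, with assumed
# intrinsics of a 640x480 infrared imager and identity extrinsics.
p_cam = pixel_to_camera_3d(400, 250, 6.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
p_world = camera_to_world(p_cam, [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0.0, 0.0, 0.0))
```

With non-identity extrinsics the same call places the fire source in the map frame used for aiming the fire extinguishing device.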
Further, arranging a plurality of smoke sensors and infrared cameras in the target area and connecting them with the fire extinguishing robot by wireless communication to form the distributed fire detection network comprises: the distributed fire detection network comprises smoke sensors and infrared cameras arranged at different positions of the target area, the smoke sensors detecting smoke concentration and the infrared cameras collecting infrared images; a LoRa or WiFi wireless communication module is integrated on each smoke sensor and each infrared camera; the installation position of each smoke sensor and infrared camera is bound with its sensor node identifier in the system; and the system platform receives the data of each sensor node in real time and performs fire analysis and data preprocessing.
Further, when the distributed fire detection network detects a fire through the flame recognition algorithm, positioning information of the fire area is sent to the fire extinguishing robot. The system platform performs multi-source information fusion on the preprocessed smoke concentration data and infrared image data of all sensor nodes and establishes a fire probability model by Bayesian inference. In the fire probability model, the smoke concentration data and the infrared image data serve as the observed quantities, and the occurrence of fire serves as the hidden variable; the posterior probability of fire occurrence given the observations is computed through Bayesian inference, the area where the fire occurs is determined from the sensor nodes whose posterior probability exceeds a threshold, and the position information of the sensor nodes in the determined area is used as the preliminary positioning information of the fire area. The preliminary positioning information is sent to the fire extinguishing robot, which determines the starting point, end point and navigation path of its travel according to its own position and environment data. After reaching the fire area, the fire extinguishing robot finds the final ignition point with its own flame detector and performs local navigation.
Further, the system platform performing multi-source information fusion on the preprocessed smoke concentration data and infrared image data of each sensor node and establishing the fire probability model by Bayesian inference comprises the steps of: defining each sensor node as a node in a Markov random field, with the observed quantity of a node being the smoke concentration data and infrared image data collected by that node; determining the neighborhood node set of each node from the spatial distribution of the sensor nodes, and correlating the observed quantity of each node with the observed quantities of its neighborhood nodes to construct the spatial correlation among nodes; establishing the Markov random field model on this spatial correlation, with the fire states as the hidden variables of the random field and the observed quantities of all nodes as its observed variables; defining an energy function E(x) in the Markov random field; and solving the maximum a posteriori estimate of fire occurrence by minimizing the energy function E(x).
Further, in the Markov random field, an energy function E(x) is defined:

E(x) = −Σi log φi(xi, yi) − Σ(i,j)∈E log ψij(xi, xj)

wherein x represents the fire state, xi represents the fire state of the i-th node, y represents the observed quantities of all nodes, and yi represents the observed quantity of the i-th node. φi(xi, yi) represents the local evidence function of the i-th node, measuring the compatibility of the observed quantity yi with the fire state xi; the larger the value, the more the observed quantity supports that fire state. Its calculation formula is:

φi(xi, yi) = exp(−(1/2) (yi − μ(xi))ᵀ Σ(xi)⁻¹ (yi − μ(xi)))

wherein μ(xi) and Σ(xi) respectively represent the observation mean and covariance matrix of the i-th node in fire state xi. ψij(xi, xj) represents the spatial correlation function between the i-th node and the j-th node, measuring the correlation strength of the fire states of adjacent nodes; the larger the value, the more consistent the fire states of the two nodes. Its calculation formula is:

ψij(xi, xj) = exp(−βij d(xi, xj))

wherein d(xi, xj) represents a distance measure between the fire states xi and xj, and βij is a correlation strength parameter. Z(y) represents the partition function for normalizing the probability distribution so that the cumulative probability of the posterior P(x|y) equals 1, with calculation formula:

Z(y) = Σx Πi φi(xi, yi) Π(i,j)∈E ψij(xi, xj), so that P(x|y) = exp(−E(x)) / Z(y).
The energy function comprehensively considers the influence of the local observation evidence of the nodes and the spatial correlation among the nodes on the fire state estimation: the local evidence function φi(xi, yi) measures the compatibility of the observed quantity with the fire state, the spatial correlation function ψij(xi, xj) measures the correlation strength of the fire states of adjacent nodes, and the partition function Z(y) normalizes the probability distribution. When the energy function is minimized to solve the fire state estimation, the local evidence term Σi φi(xi, yi) makes the estimation result match the observed data, while the spatial correlation term Σij ψij(xi, xj) keeps the estimation results of adjacent nodes consistent. The two terms constrain each other and act together, so that the Markov random field model can balance observed evidence against spatial correlation and infer the fire position more accurately. The energy function fully exploits the spatial correlation of the sensor data and improves the robustness and reliability of fire localization in complex environments.
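The interplay of the two terms can be illustrated with a toy two-node example; the scalar Gaussian evidence, the Ising-style coupling and all numeric values below are assumed instantiations of φ and ψ, chosen only to show that a state consistent with the observations gets lower energy:

```python
import math

def local_evidence(x_i, y_i, mu, sigma2):
    # Gaussian compatibility of a scalar observation y_i with state x_i:
    # larger when y_i is close to the assumed mean mu[x_i] for that state.
    return math.exp(-(y_i - mu[x_i]) ** 2 / (2.0 * sigma2))

def spatial_correlation(x_i, x_j, beta=1.0):
    # Ising-style coupling: agreeing neighbour states get higher compatibility.
    return math.exp(beta if x_i == x_j else -beta)

def energy(x, y, edges, mu, sigma2, beta=1.0):
    # E(x) = -sum_i log(phi_i) - sum_(i,j) log(psi_ij); lower = more probable.
    e = -sum(math.log(local_evidence(xi, yi, mu, sigma2)) for xi, yi in zip(x, y))
    e -= sum(math.log(spatial_correlation(x[i], x[j], beta)) for i, j in edges)
    return e

# Two adjacent nodes whose observations sit near the assumed "fire" mean 0.8:
mu = {0: 0.1, 1: 0.8}        # assumed per-state observation means
y = [0.75, 0.85]
edges = [(0, 1)]
e_fire = energy([1, 1], y, edges, mu, sigma2=0.05)
e_none = energy([0, 0], y, edges, mu, sigma2=0.05)
```

Since both observations are close to the fire-state mean, the all-fire configuration attains a much lower energy than the all-normal one, which is exactly the behaviour the minimization exploits.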
Further, the fire extinguishing robot performs verification and optimization on the preliminary positioning information to obtain the final positioning information of the fire area, by optimizing the preliminary positioning information with a particle filtering algorithm: a set of particle samples is initialized according to the position distribution of the sensor nodes in the fire area, each particle comprising a hypothesised fire position, a prior weight and node observation data; for each particle, the smoke concentration distribution and infrared radiation distribution at its hypothesised fire position are estimated, the estimated distributions are compared with the observation data collected by the nodes, and the likelihood probability of the particle is calculated; the likelihood probability of each particle is multiplied by its prior weight to obtain the posterior weight of the particle; the particle samples are resampled according to their posterior weights, retaining particles whose posterior weight exceeds a threshold, to obtain a resampled particle sample set; the posterior weight calculation and resampling are iterated on the resampled sample set until the distribution of the particle sample set converges or the number of iterations reaches a threshold; and the weighted average position of the particles in the converged sample set is taken as the final positioning information.
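The particle filtering loop can be sketched as follows; the exponential-decay smoke model, the noise scale, the node layout and the resampling scheme are illustrative assumptions rather than the application's actual forward model:

```python
import math
import random

def likelihood(particle_pos, node_pos, observed, decay=2.0):
    # Assumed forward model: smoke concentration decays exponentially with
    # distance from the hypothesised fire position; compare with the observation.
    d = math.dist(particle_pos, node_pos)
    predicted = math.exp(-d / decay)
    return math.exp(-(predicted - observed) ** 2 / 0.02)

def particle_filter(nodes, observations, n_particles=500, iters=20, seed=0):
    rng = random.Random(seed)
    xs = [n[0] for n in nodes]
    ys = [n[1] for n in nodes]
    particles = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
                 for _ in range(n_particles)]
    for _ in range(iters):
        # posterior weight = product of per-node likelihoods (uniform prior)
        w = []
        for p in particles:
            lik = 1.0
            for node, obs in zip(nodes, observations):
                lik *= likelihood(p, node, obs)
            w.append(lik)
        total = sum(w)
        w = [wi / total for wi in w]
        # resample by posterior weight, then jitter so the set keeps converging
        particles = rng.choices(particles, weights=w, k=n_particles)
        particles = [(x + rng.gauss(0, 0.1), y + rng.gauss(0, 0.1))
                     for x, y in particles]
    x_est = sum(p[0] for p in particles) / n_particles
    y_est = sum(p[1] for p in particles) / n_particles
    return x_est, y_est

# Fire at roughly (5, 5): nodes closer to it report higher smoke concentration.
nodes = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
true_fire = (5.0, 5.0)
obs = [math.exp(-math.dist(true_fire, n) / 2.0) for n in nodes]
est = particle_filter(nodes, obs)
```

After a few resampling rounds the particle cloud concentrates around the position whose predicted observations best match all nodes at once.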
Further, after the fire extinguishing robot receives the positioning information of the fire area, environment information in the target area is collected with the carried laser radar sensor, synchronous positioning and map construction are performed with a SLAM algorithm, a three-dimensional point cloud model of the buildings in the area is built, and self-positioning and navigation path planning are performed with the constructed three-dimensional point cloud model. Specifically, after receiving the positioning information of the fire area, the fire extinguishing robot performs a 360-degree scan of the target area to obtain raw point cloud data of the buildings and obstacles in the area; the raw point cloud data is downsampled to reduce the data volume, and ground points are removed with the random sample consensus algorithm (RANSAC) to obtain preprocessed point cloud data; the preprocessed point cloud data is input into the synchronous positioning and map construction algorithm to obtain the real-time pose of the fire extinguishing robot in the target area, and a three-dimensional point cloud map of the buildings in the area is built according to the real-time pose; the three-dimensional point cloud map is projected onto a two-dimensional plane and represented as a grid map; and with the current position of the fire extinguishing robot as the starting point and the fire area given by the positioning information as the end point, path planning is carried out on the grid map to obtain a navigation path from the starting point to the end point.
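The downsampling and RANSAC ground removal steps can be sketched in a few lines; a real system would use a point cloud library, and the voxel size, distance threshold and iteration count below are assumed values:

```python
import random

def voxel_downsample(points, voxel=0.2):
    # Keep one representative point per occupied voxel to cut the data volume.
    seen = {}
    for p in points:
        key = (int(p[0] // voxel), int(p[1] // voxel), int(p[2] // voxel))
        seen.setdefault(key, p)
    return list(seen.values())

def ransac_remove_ground(points, dist_thresh=0.05, iters=200, seed=0):
    # RANSAC: fit the dominant plane (assumed to be the ground) and
    # return the remaining points (buildings, obstacles).
    rng = random.Random(seed)
    best_inliers = set()
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        inliers = {i for i, p in enumerate(points)
                   if abs(sum(n[k] * (p[k] - p1[k]) for k in range(3))) / norm < dist_thresh}
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return [p for i, p in enumerate(points) if i not in best_inliers]

# Synthetic scan: a flat ground grid plus a small vertical wall.
ground = [(x * 0.5, y * 0.5, 0.0) for x in range(10) for y in range(10)]
wall = [(2.0, y * 0.5, z * 0.5 + 0.5) for y in range(5) for z in range(5)]
obstacles = ransac_remove_ground(ground + wall)
```

Only the wall points survive the ground removal, which is the input the grid-map projection step needs.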
The fire extinguishing robot determines the starting point, end point and navigation path of its travel according to its own position and the environment data, and after reaching the fire occurrence area, it searches for the final ignition point with its own flame detector and performs local navigation.
Further, inputting the preprocessed point cloud data into the synchronous positioning and map construction algorithm to obtain the real-time pose of the fire extinguishing robot in the target area, and building the three-dimensional point cloud map of the buildings in the area according to the real-time pose, comprises: inputting the local point cloud map and the robot pose into the synchronous positioning and map construction algorithm, and constructing a factor graph model with a graph-optimization-based method; predicting the robot pose from the laser radar data and the robot motion model with an extended Kalman filtering algorithm to obtain a prior estimate of the robot pose, and adding the predicted pose to the factor graph model as a laser radar odometry factor; in the factor graph model, taking the laser radar odometry factor, the loop detection factor and the local map factor as edges of the graph to construct the geometric constraint relations among the nodes; judging whether the current position of the robot has reached a previously visited position by extracting feature points containing corner points and plane points from the local point cloud map with a closed-loop detection algorithm based on appearance and spatial geometric information, and adding a closed-loop factor to the factor graph model if a closed loop is detected; constructing a nonlinear least squares optimization problem on the basis of the constructed factor graph model, and solving the optimal pose and feature point positions of each node in the factor graph model; taking the optimized robot pose as the real-time positioning result of the robot pose; and transforming the local point cloud coordinates of the robot according to the optimized robot pose to obtain the three-dimensional point cloud map of the buildings in the area.
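The factor-graph idea, odometry factors plus a loop-closure factor solved as one least-squares problem, can be illustrated with a one-dimensional toy pose graph; the gradient-descent solver, the drift value and all measurements below are illustrative and stand in for the application's LiDAR pipeline:

```python
def optimize_pose_graph_1d(odom, loops, iters=500, lr=0.1):
    # Gradient descent on the factor-graph least-squares objective:
    #   sum_i (p[i+1] - p[i] - odom_i)^2 + sum (p[j] - p[i] - loop_ij)^2
    # with p[0] anchored at 0 to remove the gauge freedom.
    n = len(odom) + 1
    p = [0.0]
    for d in odom:                       # initialise from odometry alone
        p.append(p[-1] + d)
    for _ in range(iters):
        g = [0.0] * n
        for i, d in enumerate(odom):     # odometry factors
            r = p[i + 1] - p[i] - d
            g[i + 1] += 2 * r
            g[i] -= 2 * r
        for i, j, d in loops:            # loop-closure factors
            r = p[j] - p[i] - d
            g[j] += 2 * r
            g[i] -= 2 * r
        for k in range(1, n):            # p[0] stays fixed
            p[k] -= lr * g[k]
    return p

# The robot drives a loop; odometry drifts by +0.4 overall, but the
# loop-closure factor says pose 4 should coincide with pose 0.
odom = [1.0, 1.0, 1.0, -2.6]             # true final leg would be -3.0
poses = optimize_pose_graph_1d(odom, loops=[(0, 4, 0.0)])
```

The optimizer spreads the 0.4 drift evenly over the five factors, so the loop endpoint lands near the start instead of 0.4 away, which is exactly the correction the closed-loop factor contributes in the full SLAM problem.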
Further, judging whether the current position of the robot has reached a previously visited position by extracting feature points containing corner points and plane points from the local point cloud map with a closed-loop detection algorithm based on appearance and spatial geometric information, and adding a closed-loop factor to the factor graph model if a closed loop is detected, comprises the steps of: dividing the local point cloud map into cubic voxels of equal size, and taking the most salient corner point or plane point within each voxel as the feature point of that voxel to obtain the feature point set of the local point cloud map; calculating a geometric descriptor and a visual descriptor for each feature point in the feature point set; constructing a bag-of-words model from the geometric descriptors and visual descriptors of the feature points to obtain the bag-of-words representation vector of the local point cloud map; matching the bag-of-words representation vector of the local point cloud map at the robot's current pose node against the bag-of-words representation vectors at the historical pose nodes, and calculating their similarity; and, if the similarity exceeds a threshold, judging that the current position of the robot has reached a previously visited position and adding a closed-loop factor between the current pose node and the matched historical pose node in the factor graph model.
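A minimal sketch of the bag-of-words matching step follows; a real system quantizes continuous geometric and visual descriptors against a learned vocabulary, whereas the toy string "words" and the 0.8 similarity threshold here are assumptions for illustration:

```python
import math
from collections import Counter

def bow_vector(descriptors, vocabulary):
    # Histogram of vocabulary "words" occurring among the scan's descriptors.
    counts = Counter(d for d in descriptors if d in vocabulary)
    return [counts[w] for w in vocabulary]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(current, history, vocabulary, threshold=0.8):
    # Return the index of the best-matching historical pose node, or None.
    cur_vec = bow_vector(current, vocabulary)
    best, best_sim = None, threshold
    for idx, past in enumerate(history):
        sim = cosine_similarity(cur_vec, bow_vector(past, vocabulary))
        if sim > best_sim:
            best, best_sim = idx, sim
    return best

# The current scan's descriptors closely match historical pose node 0.
vocab = ["corner", "plane", "edge"]
history = [["corner", "corner", "plane"], ["plane", "plane", "edge"]]
match = detect_loop(["corner", "plane", "corner"], history, vocab)
```

A non-`None` result is what triggers adding the closed-loop factor between the current pose node and the matched historical node.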
Another aspect of the present application also provides a front-end fire extinguishing robot control system for performing a front-end fire extinguishing robot control method of the present application.
Compared with the prior art, the application has the advantages that:
According to the application, a distributed fire detection network is formed from the smoke sensors and infrared cameras deployed in the target environment, and a fire probability model based on a Markov random field is established through multi-source fusion of the smoke concentration data and infrared image data. The probabilistic graphical model fully considers the local observation evidence of the sensor nodes and the spatial correlation among the nodes, and the Markov random field is solved through Bayesian inference to obtain the maximum a posteriori estimate of the fire area position, which overcomes the weakness that a single sensor is easily interfered with in a complex environment and improves the accuracy and reliability of fire localization.
According to the application, synchronous positioning and map construction are realized with the laser radar sensor carried by the fire extinguishing robot through factor graph optimization. The preprocessed point cloud data is input into the SLAM algorithm; with the robot poses and local point cloud maps as nodes, and laser radar odometry, loop detection and feature matching as edges, a factor graph model is built and the graph optimization problem is solved, so that a three-dimensional geometric map of the target area is built online and the robot pose is tracked in real time, enabling the robot to realize autonomous positioning and mapping in an unknown environment. Based on the constructed map, the three-dimensional point cloud is projected onto a two-dimensional plane to build a cost map, and a path search is then carried out with the A* algorithm, so that the robot can autonomously plan a collision-free navigation path and realize obstacle-avoiding navigation in complex environments.
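The A* search over the projected two-dimensional map can be sketched as follows, assuming a simple occupancy grid with unit step costs, 4-connectivity and a Manhattan-distance heuristic:

```python
import heapq
import itertools

def astar(grid, start, goal):
    # A* over a 2-D occupancy grid (0 = free, 1 = obstacle), 4-connected,
    # with an admissible Manhattan-distance heuristic for unit step costs.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()              # heap tiebreaker: never compare nodes
    heap = [(h(start), next(tie), 0, start, None)]
    parent_of, gbest = {}, {start: 0}
    while heap:
        _, _, g, cur, parent = heapq.heappop(heap)
        if cur in parent_of:
            continue                     # already expanded at equal/better cost
        parent_of[cur] = parent
        if cur == goal:                  # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent_of[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < gbest.get(nxt, float("inf")):
                    gbest[nxt] = ng
                    heapq.heappush(heap, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None                          # no collision-free path exists

# A wall blocks the direct route, so the path detours around its open end.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In the full system the cost map would additionally inflate obstacles by the robot's footprint before the search.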
The application provides a closed-loop detection algorithm based on appearance and geometric information, which is characterized in that feature points are extracted from a local point cloud map, bag-of-words representation is constructed, bag-of-words vector similarity of nodes with different poses is compared, whether a robot reaches a previous position is judged, closed-loop factors are added in a factor graph to apply closed-loop constraint to a graph model, and the drift of an odometer is effectively restrained through global optimization. Meanwhile, the loop detection provides a position priori for the factor graph, and the optimized pose and map are more accurate, so that the fire extinguishing robot can maintain stable and accurate navigation for a long time in a complex environment, and the navigation precision is improved.
According to the application, fire detection is realized by means of distributed multi-sensor fusion, and multi-source heterogeneous sensor data such as vision, infrared, radar and the like are fused, so that the sensing capability of the robot to the environment is enhanced. Meanwhile, the automatic positioning and mapping of the robot are realized by combining with an SLAM algorithm, so that the robot can construct a three-dimensional geometric map of a complex environment on line. The environment sensing and the autonomous positioning navigation are combined, so that the fire extinguishing robot can comprehensively sense the complex fire scene environment, and precise autonomous positioning and navigation are realized by utilizing the scene sensing information. The multi-sensor fusion SLAM improves the environmental adaptability of the fire extinguishing robot and enhances the operation capability of the fire extinguishing robot in complex disaster sites.
Detailed Description
Example 1
The method and system provided by the embodiment of the application are described in detail below with reference to the accompanying drawings.
Fig. 1 is an exemplary flowchart of a front-end fire extinguishing robot control method according to some embodiments of the present application, and Fig. 2 is a schematic view of an application scenario of a front-end fire extinguishing robot control method according to some embodiments of the present application. A plurality of smoke sensors and infrared cameras are disposed in a target area and connected with a fire extinguishing robot by wireless communication to form a distributed fire detection network. When the distributed fire detection network detects a fire through the flame recognition algorithm, positioning information of the fire area is sent to the fire extinguishing robot. After receiving the positioning information of the fire area, the fire extinguishing robot collects environment information in the target area with its mounted laser radar sensor, builds a three-dimensional point cloud model of the buildings in the area through the synchronous positioning and mapping (SLAM) algorithm, performs self-positioning and navigation path planning with the constructed three-dimensional point cloud model, and navigates to the fire area along the planned navigation path.
Fig. 3 is a construction diagram of a fire extinguishing robot according to some embodiments of the present application, and Fig. 4 is a construction diagram of another fire extinguishing robot according to some embodiments of the present application. As can be seen from Figs. 3 and 4, the fire extinguishing robot of the present application adopts a wheeled or crawler-type moving chassis with good passing ability and maneuverability. The body adopts a modular design and mainly comprises a sensor module, a fire extinguishing operation module and a control communication module. The sensor module integrates a camera, a thermal infrared imager, a laser radar and the like for fire monitoring and environment sensing. The fire extinguishing operation module comprises a fire extinguishing gun and a water cannon and can spray fine water mist or foam to extinguish fire. The control communication module comprises a main control unit and a power management module; the power management module supplies the electric energy required for the robot's operation and distributes and manages power. The structure is compact and reasonable, and the sensors, actuators and control unit are properly arranged, which facilitates fire scene operation. The camera collects visible light images in real time, monitors the fire scene environment and recognizes fire features such as flames and dense smoke; the infrared thermal imager collects infrared images and determines the fire source position from the high-temperature region of the temperature distribution; an image recognition algorithm based on deep learning extracts and classifies the fire features; the visual detection results and infrared detection results are fused to improve the accuracy of fire judgment; and the fire information is uploaded to a monitoring center in real time or announced by voice, a display screen and the like.
The laser radar scans the surrounding environment to obtain point cloud data around the robot and construct an environment map; a SLAM algorithm combines odometry data with the laser point cloud to realize autonomous positioning and mapping of the robot. Path planning is performed on the environment map to generate a navigation path from the current position to the target fire source, and path tracking control is performed with a model predictive control (MPC) algorithm, which generates control commands such as speed and steering in real time according to the path curvature and the robot dynamics model. The commands are sent to the chassis motors to control the robot's motion, and trajectory correction is performed according to laser radar feedback, so that the robot navigates autonomously to the vicinity of the fire source and adjusts the fire extinguishing strategy according to the fire state. The fire extinguishing gun can spray high-pressure water mist, cooling and extinguishing the fire through the heat absorbed as the water mist vaporizes and by isolating oxygen, and can also launch fire extinguishing bombs that release a large amount of extinguishing agent to put out the fire rapidly. Visual servo control adjusts the pitch angle and azimuth angle of the gun barrel according to camera feedback to aim at the fire position; continuous spraying of the extinguishing agent is realized through a water pump and an agent supply system; the fire intensity is judged through infrared imaging, and the spraying pressure and flow of the extinguishing agent are adjusted automatically; after the fire is extinguished, spraying stops automatically and the gun returns to the standby position.
The communication antenna adopts a 5G module supporting wireless communication with high bandwidth and low latency, realizing real-time data transmission between the robot and the monitoring center; where there is no 5G network coverage, communication with the monitoring center can be established by satellite communication or an ad hoc network. A high-performance industrial computer runs the ROS (Robot Operating System) to schedule and control each functional module of the robot, and displays the robot's state information, fire parameters and fire extinguishing operation pictures in real time for firefighters to observe. A remote control function is supported, allowing firefighters to control the robot's movement and fire extinguishing manually, which improves flexibility; data recording and storage functions collect data and keep logs of the operation process for post-event analysis.
Specifically, the sensor nodes are arranged, i.e. smoke sensors and infrared cameras are installed at suitable positions in the target area. The smoke sensor is a photoelectric smoke sensor that judges the smoke concentration by measuring the light intensity attenuation caused by smoke, with a measuring range of 0.1 %/m to 10 %/m. The infrared camera is an uncooled focal plane thermal infrared imager with a detection waveband of 8 to 14 μm and a spatial resolution of 320 × 240. When arranging the sensors, the environmental characteristics and monitoring blind spots must be fully considered, and full-coverage monitoring of the area is achieved by setting the sensor positions reasonably.
Integration of wireless communication modules: a LoRa wireless communication module is integrated on each sensor, with a working frequency of 2.4 GHz and a transmitting power of 4.5 dBm. The communication module is connected with the sensor through a UART interface and is responsible for framing and wirelessly transmitting the data collected by the sensor, as well as for setting up and maintaining the entire network. The sensor node data is transmitted to the robot by means of a repeater.
Fire data preprocessing: the robot's processor reads the response data of the sensor nodes from the receiving buffer and judges whether a frame is complete according to the length field in the frame header. A CRC (cyclic redundancy check) is then performed on the data frame to detect bit errors. If the check passes, the frame payload is unpacked according to the frame type and data type to obtain smoke concentration data or infrared image data. The smoke concentration data is filtered with a clipped (limiting) average filtering algorithm: a sliding window stores the last n samples; if a new sample exceeds the preset upper or lower limit threshold, it is judged an outlier and discarded, otherwise it is added to the sliding window, and the mean of the data in the window is updated as the filtering result. For the infrared image data, a median filtering algorithm removes speckle noise and salt-and-pepper noise: traversing the image pixel by pixel, the current pixel value is replaced with the median of the pixel values in its surrounding 5 × 5 neighborhood, which effectively suppresses isolated noise points. The filtered smoke concentration data is normalized by the sensor range to obtain a concentration value in the range [0, 1], and statistical features such as mean, variance and peak value of the concentration data are extracted as the basis for subsequent fire judgment. The infrared image data undergoes gray-scale normalization, mapping the original image data into the range [0, 255]; then the image contrast is enhanced through a histogram equalization algorithm to highlight high-temperature regions.
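The clipped-average and median filtering steps can be sketched as follows; the one-dimensional median filter stands in for the 5×5 image version, and the window sizes and limit thresholds are assumed values:

```python
def clipped_mean_filter(samples, window, lower, upper):
    # Clipping/limiting average filter: reject samples outside [lower, upper],
    # keep a sliding window of the last `window` accepted values, output their mean.
    accepted, out = [], []
    for s in samples:
        if lower <= s <= upper:
            accepted.append(s)
            accepted = accepted[-window:]
        if accepted:
            out.append(sum(accepted) / len(accepted))
    return out

def median_filter_1d(values, k=5):
    # 1-D analogue of the 5x5 image median filter: replace each value by the
    # median of its k-neighbourhood (edges use the available neighbours).
    half = k // 2
    out = []
    for i in range(len(values)):
        window = sorted(values[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

# The 9.9 spike is rejected by the clipping filter; the isolated 9 is
# suppressed by the median filter.
smoothed = clipped_mean_filter([0.2, 0.3, 9.9, 0.4], window=3, lower=0.0, upper=1.0)
denoised = median_filter_1d([1, 1, 9, 1, 1], k=5)
```

Both filters are order-preserving and cheap enough to run on the node data stream before normalization and feature extraction.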
And extracting characteristic parameters such as the center position of a hot spot, the high-temperature area, the average temperature and the like of the image, and preparing for subsequent fire positioning and temperature estimation.
Establishing the fire probability model: a Markov random field is defined, and each sensor node is mapped to a node in the Markov random field to form a node set V = {1, 2, …, n}. Each node i and its neighborhood node set Ni form the undirected graph G = (V, E) of the Markov random field, where E is the edge set of the graph, representing the spatial correlation between nodes. The observed quantity yi of each node i consists of two parts, the smoke concentration si and the infrared image average temperature ti, expressed as a two-dimensional feature vector. The fire state variable xi is the hidden variable of node i, taking the value 0 or 1 to indicate whether fire occurs at node i. The fire states of all nodes constitute the hidden variable sequence x of the random field.
Specifically, in a specific embodiment of the application, a Markov random field is constructed by uniformly arranging 100 sensor nodes in a 10 × 10 grid with a node spacing of 5 m, giving the node set V = {1, 2, …, 100}. Each node i corresponds to a unique grid coordinate (xi, yi), and each node i forms a neighborhood set Ni with its up to 8 surrounding nodes, i.e. Ni = {j : |xi − xj| ≤ 1, |yi − yj| ≤ 1, j ≠ i}. The inter-node edge set E contains all adjacent node pairs, 342 undirected edges in total, representing the spatial correlation between nodes. The neighborhood of corner node 1 is N1 = {2, 11, 12}.
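The grid construction can be reproduced directly, which also confirms the neighborhood of corner node 1 and the edge count of the 8-connected 10 × 10 graph (90 horizontal + 90 vertical + 162 diagonal = 342 edges):

```python
def build_grid_mrf(rows=10, cols=10):
    # 8-connected Markov random field over a rows x cols sensor grid,
    # nodes numbered 1..rows*cols in row-major order (node 1 at a corner).
    def coord(i):
        return divmod(i - 1, cols)       # zero-based (row, col)
    nodes = list(range(1, rows * cols + 1))
    neighbours = {}
    for i in nodes:
        ri, ci = coord(i)
        neighbours[i] = sorted(
            j for j in nodes
            if j != i and abs(coord(j)[0] - ri) <= 1 and abs(coord(j)[1] - ci) <= 1)
    # each undirected edge stored once as a sorted pair
    edges = {tuple(sorted((i, j))) for i in nodes for j in neighbours[i]}
    return neighbours, edges

nbrs, edges = build_grid_mrf()
```

Corner nodes get 3 neighbours, border nodes 5, and interior nodes the full 8, matching the neighborhood definition above.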
The edges connected to node 1 are {(1, 2), (1, 11), (1, 12)}, and the undirected graph G = (V, E) of the whole random field contains 100 nodes and 342 edges, describing the spatial topology of the sensing network. The observation feature vector of each node i is yi = (si, ti), where si is the smoke concentration and ti is the infrared image average temperature; si takes values in [0, 1], the normalized smoke concentration (0 for no smoke, 1 for thick smoke), and ti takes values in [20, 100], the average temperature of the infrared image in degrees centigrade. The hidden variable xi ∈ {0, 1} indicates whether fire occurs at node i (0 for normal, 1 for fire), and the hidden variables of the whole random field are x = (x1, x2, …, x100), representing the fire states of all nodes. In the probability graph model, by the Markov assumption, the fire state of each node is related only to the states of its neighborhood nodes. Given the observed features y = {yi}, the joint posterior distribution of the fire states x = {xi} can be decomposed as:

P(x|y) = (1/Z) Πi φ(xi, yi) Π(i,j)∈E ψ(xi, xj)

wherein Z is a normalization factor, φ(xi, yi) is the local evidence function of node i, related to the observed feature yi, and ψ(xi, xj) is the spatial correlation function between nodes i and j, reflecting the state correlation of neighborhood nodes. The local evidence function may be defined as:

φ(xi, yi) = exp(xi · (w1 si + w2 ti))

where w1 and w2 are weighting coefficients of the smoke concentration and temperature features, which can be set according to prior knowledge. The spatial correlation function may be defined as an Ising model:

ψ(xi, xj) = exp(β · 1[xi = xj])

where β is the spatial correlation strength parameter, β > 0 indicates that neighborhood nodes tend to take the same state, and 1[·] is the indicator function.
Model inference: given the observation features Y, the posterior probability distribution P(X | Y) of the hidden variable X may be inferred with a Bayesian approach; the prior probability P(X) may be assumed to be a uniform distribution, and the likelihood function P(Y | X) may be constructed from the local evidence functions φ(xi, yi). Because of the complex loopy structure of the Markov random field, the posterior probability P(X | Y) is difficult to solve in closed form, and an approximate inference method is needed. Common inference algorithms include the mean-field approximation, loopy belief propagation (loopy BP), and the like, which realize probability estimation through iterative message passing. Taking the loopy BP algorithm as an example, each node i maintains a marginal distribution bi(xi), representing the probability that node i is in state xi. In each iteration node i receives a message mj→i(xi) from each neighbor node j and updates its marginal distribution as bi(xi) ∝ φ(xi, yi) · ∏j∈Ni mj→i(xi). After several rounds of iteration the marginal distribution bi(xi) converges, giving the fire probability of each node i. Decision and early warning: from the marginal distribution bi(xi), the posterior probability P(xi = 1 | Y) of a fire at each node i can be obtained. A fire early-warning threshold δ = 0.7 is set; if P(xi = 1 | Y) > δ, node i is judged to be on fire, otherwise it is judged normal. Assuming the node fire probabilities at a certain moment are: node 1: P(x1 = 1 | Y) = 0.82, node 2: P(x2 = 1 | Y) = 0.35, node 3: P(x3 = 1 | Y) = 0.91, the set of fire nodes A = {1, 3} is obtained, and the node positions in A are reported to the monitoring center for fire early warning.
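The message-passing update and thresholding decision described above can be sketched as follows on a small fully connected toy graph; the evidence weights W1, W2 and the coupling strength BETA are illustrative assumptions, not values from the specification.

```python
import math

# Hedged sketch of loopy belief propagation for the binary fire MRF.
W1, W2, BETA = 3.0, 2.0, 0.8

def phi(x, y):
    """Local evidence phi(x_i, y_i): higher smoke/temperature favors x=1."""
    s, t = y
    score = W1 * s + W2 * (t - 20.0) / 80.0   # normalize temperature to [0,1]
    return math.exp(score if x == 1 else -score)

def psi(xi, xj):
    """Ising-style pairwise term: neighbors prefer equal states."""
    return math.exp(BETA if xi == xj else -BETA)

def loopy_bp(neighbors, obs, iters=30):
    msgs = {(j, i): [1.0, 1.0] for i in neighbors for j in neighbors[i]}
    for _ in range(iters):
        new = {}
        for (j, i) in msgs:
            for xi in (0, 1):
                total = 0.0
                for xj in (0, 1):
                    prod = phi(xj, obs[j])
                    for k in neighbors[j]:
                        if k != i:
                            prod *= msgs[(k, j)][xj]
                    total += psi(xi, xj) * prod
                new.setdefault((j, i), [0.0, 0.0])[xi] = total
        for key, m in new.items():          # normalize to avoid overflow
            z = m[0] + m[1]
            new[key] = [m[0] / z, m[1] / z]
        msgs = new
    beliefs = {}
    for i in neighbors:
        b = [phi(x, obs[i]) for x in (0, 1)]
        for j in neighbors[i]:
            b = [b[x] * msgs[(j, i)][x] for x in (0, 1)]
        beliefs[i] = b[1] / (b[0] + b[1])   # P(x_i = 1 | Y)
    return beliefs

# 4-node toy grid, fully connected; node 3 sees heavy smoke and heat
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
obs = {0: (0.05, 22), 1: (0.1, 25), 2: (0.2, 30), 3: (0.9, 95)}
b = loopy_bp(neighbors, obs)
print({i: round(p, 3) for i, p in b.items()})
```

Applying the early-warning threshold δ = 0.7 to the returned beliefs reproduces the decision step of the text.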
Node neighborhood division: the nodes are triangulated with a Delaunay triangulation algorithm according to the spatial coordinates (xi, yi, zi) of the sensor nodes. After triangulation, the three vertices of each triangle are mutual neighborhood nodes, forming the neighborhood set Ni of node i. An adjacency matrix A is created: if node j is a neighbor node of i, then aij = 1, otherwise aij = 0. The adjacency matrix represents the connection relations of the nodes in the graph G.
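The adjacency-matrix construction above can be sketched as follows; the triangulation itself is assumed already computed (as index triples, e.g. from scipy.spatial.Delaunay) and the 4-node example is illustrative.

```python
# Minimal sketch: building the neighborhood sets N_i and adjacency matrix A
# from an already-computed Delaunay triangulation.

def adjacency_from_triangles(n_nodes, triangles):
    A = [[0] * n_nodes for _ in range(n_nodes)]
    for a, b, c in triangles:
        for i, j in ((a, b), (b, c), (a, c)):
            A[i][j] = A[j][i] = 1        # undirected edge of the graph G
    return A

def neighborhoods(A):
    return {i: [j for j, aij in enumerate(row) if aij]
            for i, row in enumerate(A)}

# 4 nodes, 2 triangles sharing the edge (1, 2)
tris = [(0, 1, 2), (1, 2, 3)]
A = adjacency_from_triangles(4, tris)
N = neighborhoods(A)
print(N)   # nodes 1 and 2 are adjacent to all others; 0 and 3 are not adjacent
```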
Defining the observation conditional distribution: for each node i, its observation yi obeys a Gaussian distribution conditioned on the fire state xi, with probability density function f(yi | xi) = N(yi | μ(xi), Σ(xi)), where μ(xi) and Σ(xi) are respectively the mean vector and covariance matrix under state xi, which can be obtained from statistics of historical data: μ(0) and Σ(0) are the mean and variance of smoke concentration and temperature when no fire occurs, μ(1) and Σ(1) are the corresponding parameters when a fire occurs, and ρ is the correlation coefficient between smoke concentration and temperature.
Defining the prior distribution of fire states: assuming the prior fire probabilities of all nodes are the same, each follows a Bernoulli distribution with parameter π, namely P(xi = 1) = π, P(xi = 0) = 1 − π, where π can be set according to the fire-risk level of the environment; π = 0.01 indicates a 1% prior probability of fire. Defining the joint distribution of the Markov random field: according to the Hammersley-Clifford theorem, the joint distribution P(x, y) of the Markov random field can be decomposed into a product of potential functions, P(x, y) = (1/Z) · ∏i φi(xi, yi) · ∏(i,j)∈E ψij(xi, xj), where Z is the partition function, φi(xi, yi) is the local potential function of node i, and ψij(xi, xj) is the interaction potential function of nodes i and j. Solving the probability model: substituting the local potential function φi(xi, yi) and interaction potential function ψij(xi, xj) into the expression of the energy function E(x) = −Σi log φi(xi, yi) − Σ(i,j) log ψij(xi, xj) gives P(x | y) = (1/Z) · exp[−E(x)], where Z, the partition function, is a normalization constant such that the probability distribution P(x) sums to 1. Since Z is independent of x, it can be ignored when solving for the optimal state x. Thus, minimizing E(x) is equivalent to maximizing P(x | y).
Gibbs sampling algorithm: Gibbs sampling approximates the posterior distribution P(x | y) by updating the state of each node one by one, generating a sample sequence of the Markov random field. The specific sampling steps are as follows. Randomly initialize the state of the hidden variables X(0) and let the iteration number t = 0. In the t-th iteration, the state of each node i is updated in turn: the local energy of node i in the two states xi = 0 and xi = 1 is calculated, and the state transition probability of node i is calculated from the energy values as P(xi = 1 | x−i(t), y) = exp[−Ei(1)] / (exp[−Ei(0)] + exp[−Ei(1)]), where x−i(t) denotes the states of the nodes other than node i at time t. A sample is drawn from the Bernoulli distribution with this probability to update the state of node i. This is repeated until the states of all nodes have been updated, yielding a complete state sample X(t). Repeating for a preset number of iterations (such as 1000) gives a set of state samples {X1, X2, ..., XT}. The marginal probability of node i is estimated from the sampled state sequence as P(xi = 1 | y) ≈ (1/T) · Σt I(xi(t) = 1), where I(·) is the indicator function and T is the number of iterations; P(xi = 0 | y) = 1 − P(xi = 1 | y).
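The per-node update loop above can be sketched as follows; the 4-node chain topology, the evidence log-odds, and the coupling strength BETA are illustrative assumptions standing in for the full grid model.

```python
import math, random

# Hedged sketch of the Gibbs sampler: local energy, conditional flip
# probability, and marginal estimation by sample frequency.
random.seed(0)
BETA = 1.0
evidence = [0.2, 0.3, 2.5, 2.8]   # assumed log-odds of fire from observations y_i
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def local_energy(x, i, xi):
    """E_i(x_i) with the rest of the field fixed (lower = more probable)."""
    e = -evidence[i] * xi
    for j in neighbors[i]:
        e -= BETA if x[j] == xi else -BETA   # Ising coupling
    return e

def gibbs(n_iters=5000, burn_in=500):
    x = [random.randint(0, 1) for _ in neighbors]
    counts = [0] * len(x)
    for t in range(n_iters):
        for i in neighbors:
            e0, e1 = local_energy(x, i, 0), local_energy(x, i, 1)
            p1 = 1.0 / (1.0 + math.exp(e1 - e0))   # P(x_i = 1 | rest)
            x[i] = 1 if random.random() < p1 else 0
        if t >= burn_in:                            # discard burn-in samples
            for i, xi in enumerate(x):
                counts[i] += xi
    return [c / (n_iters - burn_in) for c in counts]

marginals = gibbs()
print([round(p, 2) for p in marginals])   # nodes 2 and 3 should be near 1
```

The returned marginals play the role of P(xi = 1 | y) in the thresholding step that follows.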
Fire area localization: according to the posterior probability P(xi = 1 | y) estimated by Gibbs sampling, a threshold λ ∈ (0, 1) is set to judge whether a fire occurs at node i; if P(xi = 1 | y) > λ, node i is judged to be on fire, otherwise it is judged normal. The position coordinates (xi, yi, zi) of all fire nodes are added to the set S, and S is the estimated fire area. The Zigbee short address of each node in the fire node set S is obtained by querying the position-address mapping table. A fire confirmation frame is sent to each fire node, requesting it to reply with an acknowledgment frame; nodes from which no response is received are removed from S. The poller's node lists are then updated: the node addresses in S are deleted from the regular polling list and added to the fire polling list. According to the size of the fire area, the query frequency of the nodes in the fire polling list is dynamically adjusted to realize focused monitoring.
Particle filtering algorithm. Particle initialization: N particles are randomly scattered in the fire area according to its size and shape, each particle containing the following information: particle number i with 1 ≤ i ≤ N, fire position coordinates pi = [xi, yi, zi], a prior weight wi = 1/N, and node observations Di = {d1i, d2i, ..., dni}, where dji is the smoke-concentration and infrared-temperature observation of node j under particle i. Particle weight update: for each particle i, the smoke concentration sji and infrared temperature tji at each node are estimated from its fire position pi; estimated observation data Di' under particle i can be obtained using a Gaussian smoke-diffusion model and a radiation transmission model. The estimated observation data Di' is compared with the actual observation data Di, and the likelihood probability is calculated as Pi = exp[−Σj ((sji' − sji)²/(2σs²) + (tji' − tji)²/(2σt²))], where σs² and σt² are the observation noise variances of smoke concentration and temperature. Multiplying the likelihood probability by the prior weight gives the un-normalized posterior weight of particle i, wi' = Pi × wi; the posterior weights of all particles are then normalized so that they sum to 1.
Particle resampling: according to the normalized posterior weights wi, N new particles are resampled from the N particles by the roulette-wheel method. Particles with higher posterior weights are sampled multiple times, while particles with lower weights may be eliminated, so that the particle samples concentrate in regions of higher posterior probability. After resampling, the weights of all particles are set to 1/N, giving the resampled particle set. Iterative updating: the particle set is repeatedly updated until one of the following stopping conditions is met: the distribution of the particle set converges, i.e., the weight of the particle with the largest posterior weight exceeds a threshold (such as 0.9), or the number of iterations reaches a preset maximum (such as 100). Fire localization: the fire position coordinates of the converged particle set are weighted-averaged by posterior weight to obtain the posterior estimate of the fire position, p = Σi wi · pi, and p is transmitted to the navigation module of the fire extinguishing robot as the final localization coordinates of the fire area.
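The weight-update and roulette-resampling steps can be sketched as follows; the sensor layout, the toy Gaussian plume standing in for the diffusion/radiation models, and the noise level SIGMA_OBS are illustrative assumptions.

```python
import math, random

# Hedged sketch of particle weight update and roulette-wheel resampling.
random.seed(1)
NODES = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0)]      # assumed sensor positions
SIGMA_OBS = 0.1                                      # observation noise std

def predicted_smoke(fire_pos, node):
    d2 = (fire_pos[0] - node[0]) ** 2 + (fire_pos[1] - node[1]) ** 2
    return math.exp(-d2 / 200.0)                     # toy plume model

def likelihood(fire_pos, observed):
    ll = 0.0
    for node, z in zip(NODES, observed):
        ll += -((z - predicted_smoke(fire_pos, node)) ** 2) / (2 * SIGMA_OBS ** 2)
    return math.exp(ll)

def resample(particles, weights):
    """Roulette-wheel (multinomial) resampling; weights become uniform after."""
    n, total = len(particles), sum(weights)
    picks = sorted(random.uniform(0, total) for _ in range(n))
    out, cum, i = [], 0.0, 0
    for w, p in zip(weights, particles):
        cum += w
        while i < n and picks[i] <= cum:
            out.append(p)
            i += 1
    return out

true_fire = (18.0, 2.0)
obs = [predicted_smoke(true_fire, nd) for nd in NODES]
particles = [(random.uniform(-10, 30), random.uniform(-10, 30)) for _ in range(500)]
weights = [likelihood(p, obs) for p in particles]
particles = resample(particles, weights)
# after resampling weights are uniform, so the plain mean estimates the fire position
ex = sum(p[0] for p in particles) / len(particles)
ey = sum(p[1] for p in particles) / len(particles)
print(round(ex, 1), round(ey, 1))
```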
After the fire extinguishing robot receives the localization coordinates [x, y, z] of the fire area, they are set as the target point p. The robot chassis is controlled to rotate so that the heading of the onboard lidar is aligned with the direction toward p. The lidar is started for 360° rotary scanning with scanning frequency fs (such as 10 Hz) and scanning angular resolution Δθ (such as 0.1°). Each time the lidar finishes a scan, the acquired scan point set Di = {p1, p2, ..., pn} is sent to the point cloud processing module. The point cloud processing module receives the scan point set Di, extracts the three-dimensional coordinates [xj, yj, zj] and reflection intensity value ij of each scan point pj, and constructs the raw point cloud data Pi: Pi = {[x1, y1, z1, i1], [x2, y2, z2, i2], ..., [xn, yn, zn, in]}; the raw point cloud data Pi from each scan is accumulated to obtain the complete environment point cloud data P = P1 ∪ P2 ∪ ... ∪ Pm, where m is the number of scans, m = 360°/Δθ. Point cloud downsampling: voxel rasterization downsampling is performed on the raw point cloud P to reduce the data volume. The three-dimensional space is divided into a grid of cubes of size d × d × d, d being the voxel side length (e.g. 0.1 m). Each point pj in the point cloud P is traversed and the voxel index [i, j, k] to which it belongs is calculated as i = ⌊xj/d⌋, j = ⌊yj/d⌋, k = ⌊zj/d⌋. For each non-empty voxel, the coordinate average of all points within the voxel is taken as the representative point of that voxel.
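The voxel-grid downsampling step just described can be sketched as follows: points are hashed to d × d × d voxels and each non-empty voxel is replaced by the centroid of its points. The tiny three-point cloud is illustrative.

```python
from collections import defaultdict

# Hedged sketch of voxel rasterization downsampling.
def voxel_downsample(points, d=0.1):
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // d), int(y // d), int(z // d))   # voxel index [i, j, k]
        voxels[key].append((x, y, z))
    out = []
    for pts in voxels.values():
        n = len(pts)
        out.append(tuple(sum(c) / n for c in zip(*pts)))  # centroid as representative
    return out

cloud = [(0.01, 0.02, 0.0), (0.03, 0.04, 0.0),   # same voxel -> one point
         (0.51, 0.52, 0.0)]                      # different voxel
q = voxel_downsample(cloud, d=0.1)
print(len(q))   # 2
```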
The downsampled point cloud data Qd = {q1, q2, ..., ql} is constructed from all voxel representative points, where l is the number of non-empty voxels and l ≤ n. After downsampling, the point cloud density is greatly reduced and the data size shrinks. Ground point removal: plane fitting is performed on the downsampled point cloud Qd using the RANSAC algorithm to extract the ground point cloud Qg. Three points in Qd are randomly selected, the plane equation ax + by + cz + d = 0 defined by these 3 points is computed, the distances from all points in Qd to the plane are calculated, and the number ni of points whose distance is less than a threshold t (e.g. 0.05 m) is counted. This is repeated several times (e.g. 100 times), and the plane with the largest ni is taken as the ground plane. Points in Qd less than t from the ground plane are marked as ground points, forming the ground point cloud Qg. The ground points are removed from Qd to obtain the non-ground point cloud Q = Qd − Qg, completing the acquisition and preprocessing of the lidar data; the non-ground point cloud Q representing environmental obstacles is obtained, ready for subsequent localization and mapping. This preprocessing stage, combining voxel rasterization with RANSAC plane fitting, reduces the point cloud density, effectively filters out a large number of redundant ground points, reduces the data volume, and highlights the geometric features of the obstacles.
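The RANSAC ground-segmentation loop above can be sketched as follows; the synthetic "ground plus wall" cloud is an illustrative assumption.

```python
import random

# Hedged sketch of RANSAC ground segmentation: fit planes through random
# triples, count inliers within distance t, keep the best plane, then split
# the cloud into ground / non-ground points.
def plane_from_points(p1, p2, p3):
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (a * a + b * b + c * c) ** 0.5
    if norm < 1e-12:                      # degenerate (collinear) triple
        return None
    a, b, c = a / norm, b / norm, c / norm
    return a, b, c, -(a * p1[0] + b * p1[1] + c * p1[2])

def ransac_ground(points, t=0.05, iters=100, seed=0):
    rng = random.Random(seed)
    best, best_count = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        count = sum(abs(a*x + b*y + c*z + d) < t for x, y, z in points)
        if count > best_count:
            best, best_count = plane, count
    a, b, c, d = best
    ground, non_ground = [], []
    for p in points:
        (ground if abs(a*p[0] + b*p[1] + c*p[2] + d) < t else non_ground).append(p)
    return ground, non_ground

rng = random.Random(42)
cloud = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(-0.02, 0.02))
         for _ in range(200)]                                         # flat ground
cloud += [(5.0, rng.uniform(0, 10), rng.uniform(0.5, 2.5)) for _ in range(50)]  # wall
g, q = ransac_ground(cloud)
print(len(g), len(q))   # most ground points removed, wall points kept
```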
Simultaneous localization and mapping: the pose (position and orientation) of the fire extinguishing robot is initialized to x0 = [0, 0, 0] and set as the origin of the coordinate system. From the robot's odometry data, an a-priori estimate xt' of the robot pose xt at time t is predicted, and an odometry factor based on the motion model is established: fo(xt−1, xt) = ‖xt ⊖ (xt−1 ⊕ ut)‖Σ, where ut is the robot control input from t−1 to t, ⊕/⊖ denote the composition operators on the Lie group SE(3), and ‖·‖Σ denotes the Mahalanobis distance. The preprocessed point cloud Q is coordinate-transformed to obtain a local point cloud map Lt = T(xt)·Q with xt as the reference frame, where T(·) is a coordinate transformation matrix on SE(3). The corner-point and plane-point features in Lt are extracted, and a closed-loop detection algorithm based on appearance and geometric information is used to judge whether the current pose xt has returned to a previous position xj.
If a closed loop is detected, a closed-loop factor is added to the factor graph: fc(xt, xj) = ‖T(xt)−1·T(xj) − ΔT‖Σ, where ΔT is the relative pose transformation at the closed loop. Based on the local map Lt, stable corner-point and plane-point features are extracted and local map factors are constructed: fl(xt, Lt) = Σk ‖T(xt)·pk − qk‖Σ, where pk is a feature point in Lt and qk is the corresponding global map point. The odometry factor, closed-loop factor, and local map factors are integrated to construct the factor-graph optimization problem, and this nonlinear least-squares problem is solved to obtain the optimal estimates of the robot poses and feature points. According to the optimized robot pose, the local point cloud map Lt is transformed into the global coordinate system and added to the global map to realize online map construction.
Path planning with the A* search algorithm. Map rasterization: the global point cloud map is converted into a two-dimensional occupancy grid map to facilitate path searching. The xy-plane is divided into square grids of size d × d, d being the grid side length (e.g. 0.2 m). Each point pi in the global point cloud map is traversed and the grid index [m, n] to which it belongs is calculated as m = ⌊xi/d⌋, n = ⌊yi/d⌋. The number of points in each grid cell is counted; cells with more than a threshold to points (such as 5) are marked occupied, cells with fewer than tf points (such as 1) are marked free, and the remaining cells are unknown. Occupied cells are set to 1, free cells to 0, and unknown cells to 0.5, constructing the two-dimensional occupancy grid map M.
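The rasterization rule above can be sketched as follows; the grid size and the tiny point set are illustrative assumptions.

```python
# Hedged sketch of grid rasterization: count projected points per d x d cell
# and mark cells occupied (1), free (0), or unknown (0.5) using t_o and t_f.
def occupancy_grid(points, d=0.2, size=10, t_o=5, t_f=1):
    counts = [[0] * size for _ in range(size)]
    for x, y in points:
        m, n = int(x // d), int(y // d)
        if 0 <= m < size and 0 <= n < size:
            counts[m][n] += 1
    return [[1 if c > t_o else (0 if c < t_f else 0.5) for c in row]
            for row in counts]

pts = [(0.05, 0.05)] * 6 + [(0.45, 0.45)] * 2   # 6 hits -> occupied, 2 -> unknown
g = occupancy_grid(pts, d=0.2, size=5)
print(g[0][0], g[2][2], g[4][4])   # 1 0.5 0
```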
A* search: the grid cells corresponding to the robot's current position xt and the target point p are taken as the start s and goal g, and the A* search is executed on the occupancy grid map M. The start s is added to the OPEN list and the cost of s is initialized to 0. The following steps are executed in a loop until OPEN is empty or the goal g is found: the node n with the smallest cost is taken from OPEN and added to the CLOSED list. If n is the goal g, the loop exits and the search succeeds. The 8 neighboring nodes {ni}, i = 1, 2, ..., 8, of node n are expanded. For each neighboring node ni, its cost value f(ni) = g(ni) + h(ni) is calculated, where g(ni) is the actual cost from the start s to ni and h(ni) is the estimated cost from ni to the goal g, here the Euclidean distance. If ni is in CLOSED it is skipped; if ni is not in OPEN it is added to OPEN with n set as its parent node; if ni is already in OPEN, its original cost value is compared with the newly calculated one, the smaller is kept, and its parent node is updated accordingly. If the search succeeds, backtracking from the goal g along the parent pointers generates the optimal path {p0, p1, ..., pk} from s to g, where p0 = s and pk = g.
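The search loop above can be sketched as follows; the small test grid is an illustrative assumption.

```python
import heapq, math

# Hedged sketch of the A* grid search: 8-connected expansion, g = accumulated
# cost, h = Euclidean distance to the goal, path recovered via parent pointers.
def astar(grid, start, goal):
    """grid[m][n] == 1 means occupied; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)
        if n in closed:
            continue
        if n == goal:                       # success: backtrack parent pointers
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        closed.add(n)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                ni = (n[0] + dx, n[1] + dy)
                if not (0 <= ni[0] < rows and 0 <= ni[1] < cols):
                    continue
                if grid[ni[0]][ni[1]] == 1 or ni in closed:
                    continue
                step = math.hypot(dx, dy)
                if ni not in g or g[n] + step < g[ni]:
                    g[ni] = g[n] + step
                    h = math.hypot(goal[0] - ni[0], goal[1] - ni[1])
                    heapq.heappush(open_heap, (g[ni] + h, ni))
                    parent[ni] = n
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 3))
print(path)
```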
Path optimization and motion control: the global path {pi} obtained by the A* search is locally optimized so that it satisfies the kinematic and dynamic constraints of the robot. The path nodes {pi} are parametrically interpolated with a cubic spline to generate a smooth reference trajectory r(t). Based on the reference trajectory r(t), a trajectory-tracking control algorithm computes the robot speed v(t) and acceleration a(t) in real time, where vmax and amax are the robot's maximum speed and acceleration, kv and ka are tracking-error attenuation coefficients, and p(t) is the robot's real-time position. The computed speed and acceleration commands are issued to the robot chassis to drive it along the reference trajectory. During motion, lidar data acquisition and localization/mapping continue to run, updating the robot pose xt and the global point cloud map. When the robot reaches the vicinity of the target point p (e.g. within 0.5 m), it stops moving and sends its current pose xt to the fire extinguishing control module to request the start of the fire extinguishing operation.
Simultaneous localization and mapping algorithm. Factor graph construction: the robot pose at time t is denoted xt and the local point cloud map Lt. According to the robot's motion model and the pose xt−1 at time t−1, an Extended Kalman Filter (EKF) algorithm predicts xt, giving an a-priori estimate xt'. This estimate is added to the factor graph as the lidar odometry factor fo(xt−1, xt), representing the robot motion constraint from time t−1 to t: fo(xt−1, xt) = ‖xt ⊖ (xt−1 ⊕ ut)‖Σ, where ut is the robot control input from t−1 to t, ⊕ is the composition operator on the SE(3) group, and ‖·‖Σ denotes the Mahalanobis distance. The corner-point and plane-point features in Lt are extracted and local map factors fl(xt, Lt) = Σk ‖T(xt)·pk − qk‖Σ are constructed, where pk is a feature point in Lt, qk is the corresponding feature point in the global map, and T(·) is a coordinate transformation on SE(3). The local map factor fl(xt, Lt) is added to the factor graph, representing the constraint relationship between the robot pose and the local map.
Closed-loop detection: Lt is divided into cube voxels of equal size, and within each voxel the most salient feature point is extracted as the voxel's representative point, yielding the feature point set Pt of Lt. For each feature point pi in Pt, a geometric descriptor gi and a visual descriptor vi are computed: gi = [xi, yi, zi, αi, βi, γi]; vi = ORB(pi), where xi, yi, zi are the three-dimensional coordinates of pi, αi, βi, γi are the direction angles of the normal vector at pi, and ORB(·) is the ORB feature extraction algorithm. A bag-of-words model is built from the descriptors of all feature points in Pt, giving the bag-of-words representation vector wt of Lt. The similarity between wt and the bag-of-words representation vectors {wj} of the local maps at historical pose nodes is calculated; the historical pose node xm with the highest similarity exceeding a threshold τ is selected, and it is judged that the robot's position at time t has returned to its position at time m, i.e., a closed loop is detected. The feature points in Lt and Lm are matched, and the relative pose transformation ΔTmt between xt and xm is calculated. A closed-loop factor fc(xt, xm) = ‖T(xt)−1·T(xm) − ΔTmt‖Σc is added to the factor graph, where T(xt)−1 is the inverse transformation on SE(3) and Σc is the covariance matrix of the closed-loop error.
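The loop-candidate test above can be sketched as follows; cosine similarity is assumed as the similarity measure (the text does not fix one), and the word-count vectors are illustrative.

```python
import math

# Hedged sketch of the closed-loop candidate test: similarity between the
# current bag-of-words vector w_t and historical vectors {w_j}; a loop is
# declared at the most similar node if the similarity exceeds tau.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(w_t, history, tau=0.8):
    scores = [(cosine(w_t, w_j), m) for m, w_j in history.items()]
    best, m = max(scores)
    return m if best > tau else None

# historical BoW vectors keyed by pose-node id
history = {0: [5, 0, 1, 2], 1: [0, 4, 4, 0], 2: [4, 0, 2, 2]}
print(detect_loop([5, 0, 2, 2], history))   # node 2 is the best match
```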
Solving the factor-graph optimization problem with nonlinear least squares: all factors are expressed as residual blocks ri(xt, pk): rl(xt, pk) = T(xt)·pk − qk; rc(xt, xm) = T(xt)−1·T(xm) − ΔTmt. Taking the squared norms of the residual blocks as the objective function, the least-squares problem min Σi ‖ri‖Σi² is constructed, where Σi is the information matrix of the i-th residual block, used to represent the uncertainty of that residual block.
LM algorithm solution: the estimates of the robot pose xt and feature points pk are initialized and denoted x̂t and p̂k. The residual vector r and Jacobian matrix J under the current estimate are calculated. The following steps are iterated until the maximum number of iterations is reached or a convergence condition is met. The incremental equation (JᵀΣ⁻¹J + μI)Δx = −JᵀΣ⁻¹r is formed, where Δx is the increment to be solved, μ is the damping factor of the LM algorithm, and I is the identity matrix. The incremental equation is solved to obtain Δx, and the estimate is updated as x̂ ← x̂ ⊕ exp(Δx), where exp(·) is the exponential map on the Lie algebra. The residual ri+1 under the new estimate is calculated, and the damping factor μ is adjusted according to the change of the residual: if ‖ri+1‖ < ‖ri‖, μ is decreased so that the LM algorithm behaves more like the Gauss-Newton method; otherwise μ is increased so that it behaves more like gradient descent. If ‖Δx‖ < ε or ‖ri+1‖ < ε, the algorithm is considered converged and the iteration exits; otherwise the next iteration continues.
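The damped iteration above can be sketched on a tiny two-parameter exponential fit; the pose/landmark problem in the text has the same structure with SE(3) increments in place of plain vector addition. The model, data, and gains are illustrative assumptions.

```python
import math

# Hedged sketch of the LM iteration (J^T J + mu I) dx = -J^T r for fitting
# y = a * exp(b * x), with the 2x2 normal equations solved by hand.
def residuals(theta, data):
    a, b = theta
    return [a * math.exp(b * x) - y for x, y in data]

def jacobian(theta, data):
    a, b = theta
    return [[math.exp(b * x), a * x * math.exp(b * x)] for x, _ in data]

def lm_fit(data, theta0=(1.0, 0.0), mu=1e-2, iters=50, eps=1e-10):
    theta = list(theta0)
    r = residuals(theta, data)
    cost = sum(v * v for v in r)
    for _ in range(iters):
        J = jacobian(theta, data)
        h11 = sum(j[0] * j[0] for j in J) + mu
        h22 = sum(j[1] * j[1] for j in J) + mu
        h12 = sum(j[0] * j[1] for j in J)
        g1 = -sum(j[0] * v for j, v in zip(J, r))
        g2 = -sum(j[1] * v for j, v in zip(J, r))
        det = h11 * h22 - h12 * h12
        dx = ((h22 * g1 - h12 * g2) / det, (h11 * g2 - h12 * g1) / det)
        trial = [theta[0] + dx[0], theta[1] + dx[1]]
        r_new = residuals(trial, data)
        cost_new = sum(v * v for v in r_new)
        if cost_new < cost:      # accept step: behave more like Gauss-Newton
            theta, r, cost, mu = trial, r_new, cost_new, mu * 0.5
            if dx[0] ** 2 + dx[1] ** 2 < eps:
                break
        else:                    # reject step: behave more like gradient descent
            mu *= 10.0
    return theta

data = [(x / 10.0, 2.0 * math.exp(0.5 * x / 10.0)) for x in range(10)]
a, b = lm_fit(data)
print(round(a, 3), round(b, 3))   # converges toward a = 2.0, b = 0.5
```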
Map updating: the robot pose x̂t obtained in the last iteration is taken as the optimal estimate. From x̂t, the robot pose in the global coordinate system is computed, where R(·) and t(·) represent the rotation matrix and translation vector in SE(3), respectively. All points Pt = {pi} in Lt are transformed into the global coordinate system, the transformed point cloud Pt' is fused into the global map, and the octree and voxel occupancy of the map are updated. The optimized robot pose x̂t and the updated global map are output. By constructing the factor-graph optimization problem, various constraints are integrated into one least-squares problem, and iterative solution with the LM algorithm both guarantees convergence and improves optimization efficiency. By introducing the damping factor, the LM algorithm adapts between the Gauss-Newton method and gradient descent, overcoming the shortcomings of both, and is an effective nonlinear optimization algorithm. In each iteration, the local map is coordinate-transformed and fused according to the optimized robot pose, so the map is updated in real time.
Real-time localization: the optimal pose estimate x̂t of the robot at time t is obtained by solving the factor-graph optimization; x̂t is taken as the robot's real-time pose in the global coordinate system, comprising the position coordinates (x, y, z) and attitude angles (roll, pitch, yaw). The robot's global pose is published to other modules, such as path planning and control, for their real-time use. Global point cloud map construction: from the optimized robot pose x̂t, the transformation matrix T is constructed from its rotation and translation.
Each point pi = (x', y', z') in the local point cloud map Lt is coordinate-transformed to obtain the point pi' = (x", y", z") = T × pi in the global coordinate system, and the transformed point cloud {pi'} is stitched into the global point cloud map G. Voxel downsampling is applied to G with voxel side length d (e.g. 0.1 m), merging the points in each d × d × d voxel into one point to reduce the data volume. The downsampled point cloud is smoothed with the moving least squares method to remove noise and outliers: for each point pi, a neighbor point set {qj} within radius r (e.g. 0.5 m) is selected, the local tangent plane of {qj} is fitted by least squares to obtain the normal vector ni at pi, and pi is projected along ni to yield the smoothed point pi'. The smoothed global point cloud map G is taken as the three-dimensional model of the building.
Path planning: the global point cloud map G is projected onto the xy-plane to obtain a two-dimensional occupancy grid map M. A cost map C is constructed with the same size and resolution as M and initial values of 0, taking the robot's current position as the start s and the fire-area localization information as the target g. For each cell ci in C: if ci is an occupied cell, Ci = ∞; if ci is the start s or the target g, Ci = 0; otherwise Ci = d(ci, s) + d(ci, g), where d(·) is the Euclidean distance. On the cost map C, the optimal path from s to g is planned with the A* search algorithm: the OPEN list and CLOSED list are initialized empty; s is added to OPEN with f(s) = 0. The following steps run in a loop until OPEN is empty or g is reached: the cell n with the smallest f value is taken from OPEN and added to CLOSED. If n is g, the planning succeeds and the search terminates. The 8-neighborhood cells {ni} of n are expanded, and g(ni) = g(n) + d(n, ni) and h(ni) = d(ni, g) are calculated. For each ni: if ni is in CLOSED it is skipped; if ni is not in OPEN it is added with f(ni) = g(ni) + h(ni); if ni is in OPEN and the new g(ni) is smaller than the original value, f(ni) and the parent pointer are updated. If the planning succeeds, backtracking the parent pointers from g yields the optimal path {p0, p1, ..., pk} from s to g. The path {pi} is converted into waypoints in the robot's odometry coordinate system as the navigation target path.
Fire-area fire extinguishing operation: according to the navigation path obtained from the path planning module, the robot controls the chassis motors to move along the path to the fire area. During motion, the robot localizes itself in real time with sensors such as the odometer and lidar, and compares the result with the navigation path to calculate the heading deviation and lateral deviation. According to these deviations, methods such as PID control or model predictive control adjust the robot's linear and angular velocities to realize path-tracking control. After reaching the fire area, the robot stops moving and enters the fire extinguishing operation mode.
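The deviation-based tracking step can be sketched as a simple proportional controller; the gains k_head and k_cross, the speed limit, and the slow-down rule are illustrative assumptions standing in for the PID/MPC controllers named in the text.

```python
import math

# Hedged sketch: compute heading and cross-track error against the nearest
# path segment and turn them into (linear, angular) velocity commands.
def tracking_cmd(pose, path, v_max=0.5, k_head=1.5, k_cross=2.0):
    x, y, yaw = pose
    # nearest waypoint, and the heading of the segment leaving it
    i = min(range(len(path) - 1),
            key=lambda k: (path[k][0] - x) ** 2 + (path[k][1] - y) ** 2)
    (x0, y0), (x1, y1) = path[i], path[i + 1]
    seg_yaw = math.atan2(y1 - y0, x1 - x0)
    heading_err = math.atan2(math.sin(seg_yaw - yaw), math.cos(seg_yaw - yaw))
    # signed cross-track error: left of the segment is positive
    cross_err = (x - x0) * -math.sin(seg_yaw) + (y - y0) * math.cos(seg_yaw)
    omega = k_head * heading_err - k_cross * cross_err
    v = v_max * max(0.2, math.cos(heading_err))   # slow down when misaligned
    return v, omega

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
# robot slightly left of the path and angled away: expect a right turn (omega < 0)
v, omega = tracking_cmd((0.5, 0.2, 0.3), path)
print(round(v, 2), round(omega, 2))
```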
Fire assessment and extinguishing-strategy selection: multi-source data of the fire area is acquired through the onboard flame sensor, thermal infrared imager, visible-light camera, and the like. The acquired data is fused, and fire characteristics such as flame area, temperature distribution, and smoke concentration are extracted. Based on these characteristics, the severity and type of the fire (e.g. Class A, Class B, Class C) are evaluated. In combination with environmental conditions (e.g. space size, ventilation conditions, combustible distribution), an appropriate extinguishing medium and strategy are selected. The type of extinguishing medium, the spraying parameters (such as pressure, flow rate, spraying time), and the extinguishing path are preset for different fire types and severities.
And (3) autonomous fire extinguishing operation, namely controlling a fire extinguishing device according to a selected fire extinguishing strategy, and spraying fire extinguishing medium to a fire point and a fire spreading area. The variable frequency speed regulation technology is adopted to automatically regulate the pressure and flow of the fire extinguishing pump, so that the spraying of the fire extinguishing medium achieves the optimal effect. The robot can autonomously plan a fire extinguishing path, such as a spiral shape, a Z shape and the like, according to the fire distribution and the environmental characteristics, so as to realize full coverage injection of a fire scene. In the fire extinguishing process, residual fire is monitored in real time through a flame sensor, and the fire extinguishing effect is evaluated. If the residual fire is found, the robot automatically adjusts the fire extinguishing path and the spraying parameters, and carries out supplementary spraying on the residual fire until the fire is completely extinguished.
For a large fire scene, multiple fire extinguishing robots can be deployed for collaborative operation to improve extinguishing efficiency. The robots form a distributed sensing network through wireless communication modules (such as 5G, Wi-Fi, LoRa), exchanging fire and environment information. A master control node in the network aggregates the detection data of all robots, generates a global fire map, and distributes it to each robot. Each robot autonomously allocates extinguishing areas and tasks according to the global fire map, coordinating actions to avoid repeated spraying and blind-spot omissions. During extinguishing, the robots share progress and residual-fire information in real time and dynamically optimize the extinguishing strategy to achieve efficient coordination.
Extinguishing-medium selection for different fire types: a. Class A fires (ordinary solid-material fires): mainly water or water-based extinguishing agents. b. Class B fires (flammable-liquid fires): foam, dry powder, or haloalkane extinguishing agents (such as heptafluoropropane). c. Class C fires (electrical fires): cut off the power supply first, then use carbon dioxide, haloalkane, or dry powder extinguishing agents. According to the size and structural characteristics of the fire area, appropriate extinguishing-medium reserves and spraying modes are selected. In a narrow space, small-scale, short-range water mist or aerosol extinguishing agents are preferred; in an open area, a high-flow water cannon or foam cannon can be used for long-distance, wide-range spraying.
Example 2
The fire detection network and fire extinguishing robot are deployed in a 100 m × 100 m warehouse area with the following parameters. Distributed fire detection network: 25 smoke sensors at 20 m spacing, sampling frequency 1 Hz, measuring range 0-10 %/ft; 25 infrared cameras at 20 m spacing, sampling frequency 0.5 Hz, wavelength 8-14 μm; Zigbee communication at 2.4 GHz, transmit power 20 dBm, communication distance 150 m. The sensor node location map is exemplified by node 1: (-40, -40) ↔ Addr: 0x8e5f, node 2: (-20, -40) ↔ Addr: 0x3a2c; robot polling period: 10 s.
Multi-source information fusion and fire localization: a preliminary fire is detected at position (20, 30) with probability 0.95, exceeding the Bayesian inference threshold of 0.8 and triggering a fire confirmation request. The confirmation request is sent to the three adjacent nodes at (-20, 20), (0, 20), and (20, 20), and the smoke-concentration observations returned by the nodes are: node 1 (-20, 20): 1.6%, node 2 (0, 20): 2.1%, node 3 (20, 20): 3.5%. From the observations, the parameters in the Markov random field energy function are estimated. Local evidence terms: μ1 = 1.6%, μ2 = 2.1%, μ3 = 3.5%, Σ1 = 0.64, Σ2 = 0.81, Σ3 = 1.44. Spatial correlation term: β = 1.5, d12 = 20, d13 = 40, d23 = 20. The Markov random field energy function is constructed as E(x) = −Σi log[N(f(xi) | μi, Σi)] − β·Σij exp[−|xi − xj|/dij], where x is the hidden fire-probability variable, f(·) is the observation equation, and |·| is the Euclidean distance.
The energy function E(x) is minimized with a particle filtering algorithm to estimate the posterior distribution of the fire probability. Initialization: 500 particles are randomly distributed in a 40 m × 40 m area around the three nodes. The following steps are then iterated 50 times. Prediction: propagate the particle set {x(i)_{t-1}} from the previous moment to obtain {x̃(i)_t}. Update: compute the weight of each particle x̃(i)_t as w(i)_t = p(y_t | x̃(i)_t). Resampling: if the effective sample size Neff = 1/Σ(w(i)_t)² < 250, resample from {x̃(i)_t} according to the weights w(i)_t. Output: the estimate x̂_t = Σ w(i)_t · x̃(i)_t. After the iterative solution converges, the particle set distribution gives a probability of 0.13 in the ±5 m area around node 1, 0.35 in the ±5 m area around node 2, and the largest probability in the ±5 m area around node 3; the node 3 area, having the largest probability, is taken as the final fire position.
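The predict-update-resample loop above can be sketched as follows. The particle count (500), region (40 m × 40 m), iteration count (50) and resampling threshold (Neff < 250) follow the text; the observation likelihood, motion noise and assumed source position are illustrative stand-ins.

```python
import numpy as np

# Minimal particle-filter sketch of the iteration described above. The
# Gaussian observation model around TRUE_SRC is an assumption for the demo.

rng = np.random.default_rng(0)
N = 500
TRUE_SRC = np.array([20.0, 20.0])      # assumed fire position (demo only)

def likelihood(particles):
    d = np.linalg.norm(particles - TRUE_SRC, axis=1)
    return np.exp(-0.5 * (d / 5.0) ** 2) + 1e-12

# Initialise: particles uniform over a 40 m x 40 m region.
particles = rng.uniform([0, 0], [40, 40], size=(N, 2))
for _ in range(50):
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0, 0.5, size=(N, 2))
    # Update: weight each particle by the observation likelihood.
    w = likelihood(particles)
    w /= w.sum()
    # Resample when the effective sample size drops below N/2 = 250.
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=w)
        particles, w = particles[idx], np.full(N, 1.0 / N)
    # Output: weighted-mean estimate of the fire position.
    estimate = (w[:, None] * particles).sum(axis=0)
```

After 50 iterations the particle cloud concentrates around the assumed source and the weighted mean converges toward it.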
SLAM positioning and navigation. Laser radar data acquisition: the laser radar is a Velodyne VLP-16 with 16 lines in the vertical direction and 360° horizontal scanning at a scanning frequency of 40 Hz, i.e. 40 frames of point cloud data per second, each frame containing 57,600 measuring points. The angular resolution is 0.25°, so each horizontal scan covers 360°/0.25° = 1440 points. The robot moves at 0.5 m/s with a data recording frequency of 2 Hz, i.e. one frame is recorded every 0.5 m; the total trajectory length is 45 m, so 45 m/(0.5 m/frame) = 90 frames of laser data are recorded.
Point cloud data preprocessing: voxel grid filtering is applied to each frame with a leaf size of 5 cm, reducing each frame to about 1/8 of its original size, i.e. about 7200 points. Plane fitting is then performed with the RANSAC algorithm, with the point-to-plane distance threshold set to 10 cm; assuming that 25% of the points in each frame are ground points, 50 iterations are run to fit the ground reliably. After the ground points are removed, only vertical features such as walls and columns remain in each frame, about 5400 points.
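A sketch of the two preprocessing steps, with the 5 cm leaf size and 10 cm ground threshold from the text. For brevity a height cut at z = 0 stands in for the RANSAC plane fit; a real pipeline would fit the plane from the data.

```python
import numpy as np

# Sketch: voxel-grid downsampling (5 cm leaf) + ground removal (10 cm band).
# The z = 0 ground assumption replaces the RANSAC fit purely for illustration.

def voxel_filter(points, leaf=0.05):
    """Keep one averaged point per occupied 5 cm voxel."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    out = np.zeros((inv.max() + 1, 3))
    cnt = np.zeros(inv.max() + 1)
    np.add.at(out, inv, points)     # sum points per voxel
    np.add.at(cnt, inv, 1)          # count points per voxel
    return out / cnt[:, None]       # voxel centroids

def remove_ground(points, z_thresh=0.10):
    """Drop points within 10 cm of the (assumed z = 0) ground plane."""
    return points[points[:, 2] > z_thresh]
```

Two points falling in the same 5 cm voxel collapse to one centroid, and low points are discarded as ground.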
Factor graph construction: a keyframe node is added every 5 frames, i.e. every 2.5 m, so the 45 m trajectory generates 18 keyframes in total. The relative pose transformation between keyframes is computed by the laser odometer and used as the observation of an odometry factor; the laser odometer measurement noise is assumed to be 5 cm in translation and 0.02 rad in rotation, from which the covariance matrix is constructed. Loop detection finds closed loops at keyframes 5, 10 and 15, which form closed-loop factors with the earlier keyframes 1, 4 and 7 respectively. The closed-loop factor observations are computed by ICP registration with a distance threshold of 20 cm and a maximum of 100 iterations; after ICP registration, the average pose transformation error at the closed loops is about 8 cm in translation and 0.05 rad in rotation. Data examples: frame 1 point cloud (x, y, z): (-2.1, 0.3, 1.2), (-1.8, 0.5, 0.9), ..., (3.7, -2.4, -0.6); frame 5 point cloud (x, y, z): (1.2, -0.8, 1.5), (0.9, -1.1, 1.8), ...; frame 1-5 odometry factors (dx, dy, dθ): (2.3, 0.1, 0.05), (2.4, -0.2, -0.02), (2.5, 0.3, 0.11), (2.2, 0.0, -0.07); frame 5 to frame 1 ICP closed-loop factor (dx, dy, dθ): (9.8, -1.4, -0.12).
Loop detection: 200 SHOT feature points are extracted from each of the 18 keyframe point clouds, each with a 352-dimensional SHOT descriptor; for example, the SHOT feature set of frame 1 is {s1_1, s1_2, ..., s1_200} with dimensions 200 × 352. K-means clustering is performed on the SHOT features of all keyframes to generate 100 visual words, and the feature points of each keyframe are quantized into a 100-dimensional bag-of-words vector that records the number of feature points belonging to each visual word; for example, the bag-of-words vector of frame 1 is {15, 8, 3, ...}.
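The quantization step can be sketched as follows. The descriptor dimension (352) and word count (100) follow the text; the random codebook and descriptors are placeholders for K-means centres and real SHOT features, and the cosine similarity used to compare two keyframes is a common choice, stated here as an assumption.

```python
import numpy as np

# Sketch: quantise 352-D descriptors against a 100-word codebook into a
# bag-of-words histogram, then score keyframe pairs by cosine similarity.
# Random data stands in for K-means centres and real SHOT descriptors.

rng = np.random.default_rng(1)
codebook = rng.normal(size=(100, 352))    # 100 visual words (K-means centres)

def bow_vector(descriptors):
    """Assign each descriptor to its nearest word; return the 100-bin count."""
    d2 = ((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=100).astype(float)

def cosine(a, b):
    """Similarity of two bag-of-words vectors; near 1.0 suggests a loop."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Each keyframe's 200 descriptors thus become one 100-bin histogram whose entries sum to 200, and candidate loop pairs are those with high histogram similarity.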
Factor graph optimization: a factor graph is constructed comprising 18 keyframe pose nodes, 17 odometry factors and 3 closed-loop factors (5-1, 10-4 and 15-7). The odometry factor observations are inter-frame relative poses with measurement covariance diag(0.05², 0.05², 0.02²); the closed-loop factor observations are obtained by ICP registration with measurement covariance diag(0.08², 0.08², 0.05²). The observation of the first closed-loop factor is (-9.85, 1.38, 0.14), representing the pose transformation of frame 5 relative to frame 1. Graph optimization is carried out with g2o; the optimized variables are the 18 keyframe poses, for a total of 6 × 18 = 108 degrees of freedom.
The error function E(x) = Σ{(z_odom − h_odom(x))^T · Σ_odom^(−1) · (z_odom − h_odom(x))} + Σ{(z_loop − h_loop(x))^T · Σ_loop^(−1) · (z_loop − h_loop(x))} is minimized to obtain the maximum a posteriori estimate of each keyframe pose after iterative optimization, where x is the optimized variable, z is the observation, h is the observation equation and Σ is the measurement covariance. The pose changes before and after optimization are: frame 1 (-0.08, 0.12, -0.03), frame 2 (-0.15, 0.21, -0.06), ..., frame 18 (0.32, -0.45, 0.12).
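One term of this error function can be evaluated numerically as a Mahalanobis distance. The odometry covariance diag(0.05², 0.05², 0.02²) follows the text; the measured and predicted relative poses here are illustrative values, not results from the described optimization.

```python
import numpy as np

# Sketch: one odometry-factor term (z - h(x))^T Sigma^{-1} (z - h(x)) of the
# error function above. z and h(x) are illustrative; Sigma is from the text.

Sigma_odom = np.diag([0.05**2, 0.05**2, 0.02**2])   # (dx, dy, dtheta) noise
z = np.array([2.30, 0.10, 0.05])                     # measured relative pose
h = np.array([2.27, 0.12, 0.04])                     # predicted from current x

def factor_error(z, h, Sigma):
    """Mahalanobis cost contributed by one factor."""
    r = z - h
    return float(r @ np.linalg.inv(Sigma) @ r)

E = factor_error(z, h, Sigma_odom)
```

Summing such terms over all 17 odometry factors and 3 closed-loop factors gives the total cost that g2o minimizes over the 18 keyframe poses.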
Global map construction: the laser point clouds of the 18 keyframes are transformed into the global coordinate system according to the optimized poses and stitched into a unified dense point cloud map of about one million points. The map point cloud is downsampled by voxel grid filtering to obtain a sparse point cloud map with 10 cm resolution, and the sparse point cloud is projected onto the horizontal plane to generate a two-dimensional occupancy grid map with a grid resolution of 10 cm. Each grid cell is assigned an occupancy probability, e.g. cell (25, 18): 0.92, cell (25, 19): 0.45, cell (26, 18): 0.68; cells with occupancy probability greater than 0.5 are treated as obstacles to be avoided during navigation.
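The projection step can be sketched as follows, using the 10 cm resolution and 0.5 obstacle threshold from the text. The hit-count-to-probability rule (+0.25 per hit, saturating at 1.0) is a simple stand-in for a full occupancy update, stated here as an assumption.

```python
import numpy as np

# Sketch: project 3-D points onto a 2-D occupancy grid at 10 cm resolution.
# The +0.25-per-hit probability rule is illustrative, not the actual update.

def to_occupancy_grid(points, res=0.10, shape=(50, 50)):
    grid = np.zeros(shape)
    ij = np.floor(points[:, :2] / res).astype(int)      # drop z, bin x/y
    ok = (ij >= 0).all(1) & (ij[:, 0] < shape[0]) & (ij[:, 1] < shape[1])
    for i, j in ij[ok]:
        # Each hit nudges the cell toward "occupied", saturating at 1.0.
        grid[i, j] = min(1.0, grid[i, j] + 0.25)
    return grid

def is_obstacle(grid, cell):
    """Cells above 0.5 occupancy are avoided during navigation."""
    return grid[cell] > 0.5
```

A cell hit by several projected points crosses the 0.5 threshold and is treated as an obstacle; a single stray hit is not.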
Path planning: assume the current robot position is (1.5, 1.5) and the target fire position is (22.5, 27.5); the occupancy grid map is 50 × 50 with 10 cm resolution, corresponding to an actual environment of 5 m × 5 m. The obstacles in the grid map are mainly distributed in the following areas: (10, 20)-(15, 40), a rectangular obstacle; (30, 5)-(35, 15), a rectangular obstacle; and (20, 25)-(25, 40), an L-shaped obstacle.
The path search uses the A* algorithm. Search process: OpenSet = {(1, 1)}, CloseSet = {}; current node (1, 1), f = 40.79, g = 0, h = 40.79; expand (1, 1) → {(1, 2), (2, 2), ...} and add to OpenSet; ...; current node (22, 27), f = 41.09, g = 41.09, h = 0, target point found, search complete. The total search time is 35 ms; the optimal path length is 41 steps, corresponding to an actual path length of 41 × 0.1 = 4.1 m. The optimal path found by A* is (1, 1) → (2, 2) → ... → (22, 26) → (22, 27).
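The search loop above can be sketched as a standard grid A*: f = g + h with a Euclidean heuristic and 8-connected expansion, matching the f/g/h bookkeeping shown. The tiny demo map is illustrative; the text's grid is 50 × 50.

```python
import heapq, math

# Minimal A* sketch on an occupancy grid (0 = free, 1 = obstacle),
# mirroring the OpenSet/CloseSet iteration described above.

def astar(grid, start, goal):
    h = lambda p: math.dist(p, goal)                 # Euclidean heuristic
    open_set = [(h(start), 0.0, start, None)]        # (f, g, node, parent)
    came, gbest = {}, {start: 0.0}
    while open_set:
        f, g, cur, par = heapq.heappop(open_set)
        if cur in came:                              # already closed
            continue
        came[cur] = par
        if cur == goal:                              # target found: rebuild path
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):                        # 8-connected expansion
            for dy in (-1, 0, 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if nxt == cur or nxt in came:
                    continue
                x, y = nxt
                if not (0 <= x < len(grid) and 0 <= y < len(grid[0])):
                    continue
                if grid[x][y]:                       # occupied cell
                    continue
                ng = g + math.hypot(dx, dy)
                if ng < gbest.get(nxt, float("inf")):
                    gbest[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None                                      # no path exists
```

On any solvable grid the returned path starts at the robot cell, ends at the goal cell, and visits only free cells.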
Path optimization: the 41 path nodes obtained by the A* search are connected in sequence, and the node coordinates are multiplied by 0.1 to convert them into a coordinate sequence in the actual environment. The path is smoothed with cubic Bezier curves whose control points are P0 (0.1, 0.1), P1 (1.2, 0.2), P2 (2.0, 1.8), ..., P40 (2.2, 2.6), P41 (2.2, 2.7). The 41 Bezier curve segments are generated and spliced into a smooth navigation path with a total length of about 45 m.
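One smoothing segment can be sketched with de Casteljau evaluation of a cubic Bezier curve. The four control points below reuse values from the text purely as an example of a single segment; the full path splices many such segments.

```python
# Sketch: sample one cubic Bezier segment by de Casteljau's algorithm.
# The control points are example values; the text chains 41 such segments.

def cubic_bezier(p0, p1, p2, p3, n=10):
    """Sample n+1 points of the cubic Bezier defined by 4 control points."""
    pts = []
    for k in range(n + 1):
        t = k / n
        lerp = lambda a, c: tuple((1 - t) * ai + t * ci
                                  for ai, ci in zip(a, c))
        q0, q1, q2 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)  # level 1
        r0, r1 = lerp(q0, q1), lerp(q1, q2)                    # level 2
        pts.append(lerp(r0, r1))                               # curve point
    return pts
```

The curve starts exactly at the first control point and ends exactly at the last, so consecutive segments sharing an endpoint splice into a continuous path.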
The smoothed path nodes are (0.1, 0.1), (0.15, 0.12), (0.22, 0.18), ..., (2.18, 2.68), (2.20, 2.70). Path execution: the smoothed path is discretized into 100 local target points spaced about 0.45 m apart, which the robot tracks using the wheel odometer and a PID controller. Assuming a robot linear velocity of 0.5 m/s and an angular velocity of 0.8 rad/s, the PID parameters are Kp = 1.5, Ki = 0.2, Kd = 0.3. The local path tracking error statistics are: maximum lateral error 0.15 m, maximum longitudinal error 0.22 m, average lateral error 0.08 m, average longitudinal error 0.12 m. The robot successfully avoids the intermediate obstacles and safely reaches the target fire area (2.25, 2.75) within 90 s.
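The tracking law can be sketched with the gains from the text (Kp = 1.5, Ki = 0.2, Kd = 0.3), applied here to a 1-D tracking error against a toy first-order plant purely to illustrate the control loop; the robot applies the same law to its lateral and heading errors.

```python
# Minimal PID sketch with the gains from the text. The first-order plant
# (pos += 0.1 * u) is an illustrative stand-in for the robot dynamics.

class PID:
    def __init__(self, kp=1.5, ki=0.2, kd=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0

    def step(self, error, dt=0.1):
        self.integral += error * dt                 # accumulate I term
        deriv = (error - self.prev) / dt            # finite-difference D term
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID()
pos = 0.0
for _ in range(400):                                # drive plant toward 1.0
    pos += 0.1 * pid.step(1.0 - pos)
```

The loop settles the plant onto the target, analogous to the robot converging onto each local target point with the bounded tracking errors reported above.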