Disclosure of Invention
In order to solve the technical problems in the prior art, the embodiment of the invention provides a digital construction method based on three-dimensional mapping of an unmanned aerial vehicle. The technical scheme is as follows:
In order to achieve the above purpose, the invention adopts the following technical scheme that the digital construction method based on unmanned aerial vehicle three-dimensional mapping comprises the following steps:
S1, importing a region digital earth surface model, dividing a construction region into a plurality of grids, identifying elevation peak points, calculating elevation offset of the peak points by combining elevation values of adjacent positions of the peak points, identifying a plurality of obstacles in the region, and generating obstacle identification information;
S2, calling the obstacle identification information, extracting the space position and the geometric shape of the obstacle, and calculating the bypass cost score by analyzing the path length, the flight height adjustment quantity and the path steering angle increased by bypass flight in each unmanned aerial vehicle path to obtain the unmanned aerial vehicle measurement path;
S3, acquiring a measuring path of the unmanned aerial vehicle, dynamically adjusting flight height control parameters of the unmanned aerial vehicle according to acquisition precision requirements and combining with real-time image definition, and acquiring image information and laser radar point cloud data of a construction area in real time to generate an area imaging data set;
S4, analyzing the image and the laser radar point cloud data according to the regional imaging data set by utilizing the flying height value, reconstructing a construction engineering model of the construction region, comparing the construction engineering model with a building information model of the target engineering, calculating the completion degree of the engineering in real time, identifying the engineering progress and generating construction progress information.
As a further scheme of the invention, the obstacle identification information is specifically an obstacle space distribution map layer, obstacle geometric attribute data and obstacle unique identification code, the unmanned aerial vehicle measurement path comprises a path section start-stop coordinate set, a path course angle sequence and a path corresponding obstacle avoidance strategy parameter set, the region imaging data set comprises an image resolution label, a point cloud coverage density map and an image positioning coordinate set, and the construction progress information is specifically a block completion state, a process node corresponding time identification and a construction completion proportion.
As a further scheme of the invention, a digital earth surface model of an area is imported, a construction area is divided into a plurality of grids, elevation peak points are identified, the elevation offset of the peak points is calculated by combining elevation values of adjacent positions of the peak points, a plurality of obstacles in the area are identified, and the step of generating obstacle identification information is specifically as follows:
S101, importing a regional digital earth surface model, dividing a construction region into a plurality of grids, extracting elevation data in each grid, identifying elevation peak points of each grid, recording position coordinates, and obtaining a peak point coordinate set;
S102, calculating elevation offset of a peak point in multiple directions based on the peak point coordinate set and combining elevation values of adjacent positions to obtain elevation offset data;
S103, detecting the abnormal position of the elevation in the area and identifying the obstacle, including a tower crane and an electric tower, by utilizing the elevation offset according to the elevation offset data, and generating obstacle identification information by extracting the size and position information of the target obstacle.
As a further scheme of the invention, the obstacle identification information is called, the spatial position and the geometric shape of the obstacle are extracted, the bypass cost score is calculated by analyzing the path length, the flight height adjustment and the path steering angle increased by bypass flight in each unmanned aerial vehicle path, and the steps for acquiring the unmanned aerial vehicle measurement path are specifically as follows:
S201, calling the obstacle identification information, extracting the space position coordinates, the vertical dimension and the boundary contour points of each obstacle, and identifying the position with space overlapping in each unmanned aerial vehicle path according to the space envelope range of each obstacle to obtain a path intersection segment data set;
S202, analyzing path lengths, flight height adjustment amounts and path steering angles required by the detour flight of a plurality of overlapped positions according to the path intersection segment data set, calculating detour cost scores of each path, and obtaining a path detour cost score set;
S203, calculating the priority of a plurality of unmanned aerial vehicle paths according to the path bypass cost score set, combining the bypass cost scores with the predicted flight times, and obtaining unmanned aerial vehicle measurement paths.
As a further scheme of the present invention, the specific formula for calculating the bypass cost score of each path is as follows:
$C_p = \sum_{j=1}^{n_p} \left( \alpha \cdot L_{p,j} + \beta \cdot H_{p,j} + \gamma \cdot \left| \theta_{p,j} - \bar{\theta}_j \right| \right)$;
Calculating a comprehensive score of the path bypassing cost, and acquiring a path bypassing cost score set;
Wherein, $C_p$ is the path-bypass cost composite score value of the path numbered $p$; $p$ is the unique identification number of the path; $j$ is the index number of the intersecting segments in the path; $n_p$ is the total number of intersecting segments in path $p$; $L_{p,j}$ is the bypass path length value of the $j$-th intersecting segment in path $p$; $H_{p,j}$ is the required flying height adjustment of the $j$-th intersecting segment in path $p$; $\theta_{p,j}$ is the actual steering angle value on the $j$-th intersecting segment in path $p$; $\bar{\theta}_j$ is the average value of the steering angles of all paths at the $j$-th intersection; $\alpha$ is the weight coefficient of the path length factor; $\beta$ is the weight coefficient of the flying height adjustment factor; $\gamma$ is the weight coefficient of the steering angle deviation factor.
As a further scheme of the invention, the unmanned aerial vehicle measuring path is obtained, the flying height control parameter of the unmanned aerial vehicle is dynamically adjusted according to the acquisition precision requirement and in combination with the real-time image definition, the image information of the construction area and the laser radar point cloud data are acquired in real time, and the step of generating the area imaging data set comprises the following steps:
S301, acquiring a measurement path of the unmanned aerial vehicle, calculating focal length parameters required by image acquisition equipment according to regional image acquisition precision requirements, and generating focal length flying height comparison data by combining flying height reference values required by corresponding focal lengths;
S302, according to the focal length flying height comparison data, the contrast value, the edge sharpening degree value and the target edge identifiable rate of the image are extracted in real time in the measuring process, and the definition of the image is evaluated, so that the imaging definition of the image is obtained;
S303, dynamically updating the flight height parameter according to the image imaging definition, recording an image acquisition time point and an image sequence number, and acquiring an area imaging data set by combining laser radar point cloud data.
As a further aspect of the present invention, the specific formula for evaluating the sharpness of the image is:
$D = \dfrac{G_{\max} - G_{\min}}{G_{\max} + G_{\min} + \varepsilon} + \dfrac{1}{N} \sum_{i=1}^{N} \left| g_i - \bar{g} \right| + \dfrac{1}{M} \sum_{k=1}^{M} \left| r_k - \bar{r} \right|$;
calculating the imaging definition of the image;
Wherein, $D$ represents the imaging definition of the image; $G_{\max}$ represents the maximum gray value in the image; $G_{\min}$ represents the minimum gray value in the image; $\varepsilon$ is a constant preventing division by zero; $N$ is the number of edge pixels; $g_i$ is the gradient magnitude of the $i$-th edge pixel; $\bar{g}$ is the average gradient magnitude of all edge pixels; $M$ is the number of regions in the image; $r_k$ represents the target boundary recognition rate of the $k$-th region; $\bar{r}$ is the average target boundary recognition rate over all regions; $i$ is the index of edge pixels in the image; $k$ is the index of regions in the image.
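As one illustrative reading of this definition evaluation, the sketch below combines a global gray-level contrast term, the mean absolute deviation of edge gradients, and the spread of region boundary-recognition rates. The additive combination and every name here are assumptions of the sketch, not the patented formula.

```python
def imaging_definition(gray, edge_gradients, region_rates, eps=1e-6):
    """Illustrative image-definition score: gray-level contrast plus
    edge-gradient spread plus region boundary-recognition spread.
    Additive combination is an assumption, not the patent's formula."""
    g_max, g_min = max(gray), min(gray)
    contrast = (g_max - g_min) / (g_max + g_min + eps)  # global contrast term
    g_bar = sum(edge_gradients) / len(edge_gradients)   # mean edge gradient
    grad_term = sum(abs(g - g_bar) for g in edge_gradients) / len(edge_gradients)
    r_bar = sum(region_rates) / len(region_rates)       # mean boundary rate
    region_term = sum(abs(r - r_bar) for r in region_rates) / len(region_rates)
    return contrast + grad_term + region_term
```

A brighter, sharper frame raises all three terms, which is the behavior the flight-height controller in S303 relies on when deciding whether to descend.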
According to the further scheme of the invention, according to the regional imaging data set, the image and the laser radar point cloud data are analyzed by utilizing the flying height value, the construction engineering model of the construction region is reconstructed, the construction engineering model is compared with the construction information model of the target engineering, the completion degree of the engineering is calculated in real time, the engineering progress is identified, and the construction progress information is generated by the steps of:
S401, extracting a flight height value and a time mark corresponding to each frame in image information according to the regional imaging data set, and carrying out space coordinate registration on image data and point cloud data by combining space coordinate information of each measuring point in laser radar point cloud data to generate a space data alignment result;
S402, reconstructing a building model in a target construction engineering area based on the spatial data alignment result, and extracting geometric dimension information of the model to obtain a construction engineering model;
S403, according to the construction engineering model, calling a building information model of the target engineering and comparing the building information model with an actual construction model, calculating the engineering completion degree in a plurality of grids, identifying the engineering progress and calculating the progress deviation difference value, and obtaining the construction progress information.
As a further aspect of the present invention, the method further includes:
S5, calling the construction progress information, extracting the geometric center point coordinates, the space extending directions and the boundary sizes of a plurality of components in a construction engineering model, combining the design position parameters and the direction vectors of the corresponding components in a building information model, and calculating the space position deviation degree of each component by comparing the space position difference among the geometric center points, the deviation angles among the direction vectors and the coordinate distribution difference among the boundary sizes to obtain a construction engineering measurement record;
the construction engineering measurement record comprises a component actual space position set, a component offset angle matrix and a component size deviation value distribution map.
The method comprises the steps of calling the construction progress information, extracting the coordinates, the spatial extension directions and the boundary sizes of geometric center points of a plurality of components in a construction engineering model, combining design position parameters and direction vectors of corresponding components in a building information model, and calculating the spatial position deviation degree of each component by comparing the spatial position difference among the geometric center points, the deviation angle among the direction vectors and the coordinate distribution difference among the boundary sizes, wherein the steps of obtaining the construction engineering measurement record are as follows:
S501, calling the construction progress information, extracting three-dimensional boundary data of components in the construction engineering model in each grid, and generating a component space attribute data set by extracting geometric center point coordinates, space extending direction vectors and boundary dimension values of each component;
S502, according to the component space attribute data set, extracting and comparing design coordinates, design direction vectors and standard boundary dimension data of corresponding components in the building information model to obtain component space difference data;
S503, analyzing the space position deviation degree of each component according to the component space difference data, calculating the deviation grade and mapping the deviation grade into a construction engineering model, and obtaining a construction engineering measurement record.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
Through regional obstacle recognition combined with the calculation of detour cost scores over a plurality of measurement paths, the safety and rationality of route planning are optimized; dynamic control of imaging quality is realized through linked adjustment of flight height and image definition; and by constructing the components of the construction engineering model, recognition of construction progress and detection of size deviation are realized. The method effectively improves the progress-judgment accuracy and spatial monitoring capability of digital construction, provides a data basis for engineering management decisions, and improves construction quality.
Detailed Description
The technical scheme of the invention is described below with reference to the accompanying drawings.
In embodiments of the invention, words such as "exemplary" and "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion. Furthermore, in embodiments of the present invention, "and/or" may mean both, or may mean either one of the two.
In the embodiments of the present invention, "image" and "picture" are sometimes used interchangeably; when the distinction is not emphasized, their meanings are consistent. Likewise, "of", "corresponding" and "relevant" are sometimes used interchangeably; when the distinction is not emphasized, their meanings are consistent.
In embodiments of the present invention, a subscript such as W₁ may sometimes be written in the non-subscript form W1; when the distinction is not emphasized, the meanings expressed are consistent.
In order to make the technical problems to be solved, the technical solutions and the advantages of the present invention more apparent, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, the invention provides a technical scheme, namely a digital construction method based on three-dimensional mapping of an unmanned aerial vehicle, which comprises the following steps:
S1, importing a region digital earth surface model, dividing a construction region into a plurality of grids, identifying elevation peak points, calculating elevation offset of the peak points by combining elevation values of adjacent positions of the peak points, identifying a plurality of obstacles in the region, and generating obstacle identification information;
S2, calling obstacle identification information, extracting the space position and the geometric shape of an obstacle, and calculating a bypass cost score by analyzing the path length, the flight height adjustment quantity and the path steering angle increased by bypass flight in each unmanned plane path to obtain an unmanned plane measurement path;
S3, acquiring a measurement path of the unmanned aerial vehicle, dynamically adjusting flight height control parameters of the unmanned aerial vehicle according to the acquisition precision requirement and combining with the definition of a real-time image, and acquiring image information and laser radar point cloud data of a construction area in real time to generate an area imaging data set;
S4, analyzing the image and the laser radar point cloud data by utilizing the flying height value according to the regional imaging data set, reconstructing a construction engineering model of the construction region, comparing the construction engineering model with a building information model of the target engineering, calculating the completion degree of the engineering in real time, identifying the engineering progress and generating construction progress information;
And (3) calling construction progress information, extracting geometric center point coordinates, space extending directions and boundary sizes of a plurality of components in a construction engineering model, combining design position parameters and direction vectors of corresponding components in a building information model, and calculating the space position deviation degree of each component by comparing the space position difference among the geometric center points, the deviation angle among the direction vectors and the coordinate distribution difference among the boundary sizes to obtain a construction engineering measurement record.
The obstacle identification information comprises an obstacle space distribution map layer, obstacle geometric attribute data and obstacle unique identification codes, the unmanned aerial vehicle measurement path comprises a path section start-stop coordinate set, a path course angle sequence and a path corresponding obstacle avoidance strategy parameter set, the region imaging data set comprises an image resolution label, a point cloud coverage density map and an image positioning coordinate set, the construction progress information comprises a block completion state, a process node corresponding time identification and a construction completion proportion, and the construction engineering measurement record comprises a component actual space position set, a component offset angle matrix and a component size deviation value distribution map.
The method comprises the steps of importing a region digital earth surface model, dividing a construction region into a plurality of grids, identifying elevation peak points, calculating elevation offset of the peak points by combining elevation values of adjacent positions of the peak points, identifying a plurality of obstacles in the region, and generating obstacle identification information, wherein the steps are as follows:
S101, importing a regional digital earth surface model, dividing a construction region into a plurality of grids, extracting elevation data in each grid, identifying elevation peak points of each grid, recording position coordinates, and obtaining a peak point coordinate set;
After the digital surface model of the area is imported, the two-dimensional extent of the construction area is divided into 100×100 standard grids of equal side length, each grid cell being 5 meters on a side. Once division is complete, the grids are numbered in sequence, and elevation data extraction is performed on each numbered grid: all elevation points of the digital surface model falling within the grid boundary are screened, and the three-dimensional spatial coordinates of each elevation point are recorded. The elevation points in each grid are then sorted by elevation value, and the coordinate point with the maximum elevation is selected as the elevation peak point of the current grid, its spatial coordinates being recorded into the peak point coordinate set. The acceptance rule is that the difference between the maximum elevation value and the second-highest elevation value in the same grid must exceed 1.2 meters; this threshold is set by the height of the smallest identifiable artificial structure on an engineering site, so as to avoid misidentifying low terrain fluctuations. If the difference is below 1.2 meters, no peak is recorded for the current grid. In a simulation experiment, if the maximum elevation is 47.2 meters and the second-highest value is 46.3 meters, the difference of 0.9 meters means the grid records no peak; if the maximum is 52.5 meters and the second-highest is 50.8 meters, the difference of 1.7 meters exceeds the threshold and the point is a valid peak. Peak extraction is completed by traversing all grids, yielding a spatially distributed peak point coordinate set; with a total of 20000 grids, 1275 peak points are finally obtained. The peak point coordinate set is a three-dimensional coordinate array $\{(x_i, y_i, z_i)\}$ formed from the X coordinate, Y coordinate and Z value of each point, where $i$ denotes the peak point number, $x_i$ the longitude coordinate, $y_i$ the latitude coordinate, and $z_i$ the elevation value. This array serves as the input basis for the next step of elevation offset calculation, giving the peak point coordinate set.
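A minimal sketch of this grid peak extraction, assuming the DSM is available as a list of (x, y, z) points with a grid origin at (0, 0); the function name and the choice to record no peak for single-point cells are assumptions of the sketch:

```python
from collections import defaultdict

GRID_SIZE = 5.0   # grid cell side length in meters, per the example
PEAK_GAP = 1.2    # required gap between highest and second-highest point

def extract_peaks(points, origin=(0.0, 0.0)):
    """Group DSM points (x, y, z) into 5 m grid cells and keep, per cell,
    the highest point only if it exceeds the runner-up by more than 1.2 m.
    Cells with fewer than two points record no peak in this sketch."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int((x - origin[0]) // GRID_SIZE),
               int((y - origin[1]) // GRID_SIZE))
        cells[key].append((x, y, z))
    peaks = []
    for pts in cells.values():
        pts.sort(key=lambda p: p[2], reverse=True)  # by elevation, descending
        if len(pts) >= 2 and pts[0][2] - pts[1][2] > PEAK_GAP:
            peaks.append(pts[0])
    return peaks
```

Feeding in the two worked cases from the text (52.5 m over 50.8 m passes the 1.2 m gap; 47.2 m over 46.3 m does not) yields exactly one peak.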
S102, calculating elevation offset of a peak point in multiple directions based on a peak point coordinate set and combining elevation values of adjacent positions to obtain elevation offset data;
Based on the extracted peak point coordinate set, an elevation offset calculation is performed on each peak point. Each peak point serves as a center point; the center elevation points of the adjacent grids in the 8 surrounding directions are searched, and the elevation value of the adjacent grid in each direction is differenced against the elevation value of the center peak point using the formula $\Delta h_k = h_0 - h_k$, where $h_0$ is the elevation value of the current peak point, $h_k$ is the elevation value of the center point of the adjacent grid in the $k$-th direction, and the directions are arranged by principal azimuth as east, southeast, south, southwest, west, northwest, north and northeast. The 8 directional difference values are calculated and recorded as the elevation offset value set of the peak point; this set reflects the degree of fluctuation of the peak point in each direction and can be used for subsequent abnormal-region identification and judgment of protruding structures. If the difference in one direction is greater than 2.5 meters, that direction is marked as a strong protrusion direction. The threshold is derived from the average vertical elevation characteristic of artificial structures: field investigation shows that the average elevation of a tower crane's main structure from base to rotating platform is 3.2 meters, and considering DSM precision tolerance, the lower tolerance limit is set to 2.5 meters. For example, if the peak point elevation is 51.6 meters and the eastern adjacent grid elevation is 48.5 meters, then $\Delta h = 3.1$ meters exceeds the threshold; if all other directions are below 2.5 meters, the point is recorded as having a protruding characteristic only in the east direction. Finally, the 8-direction offset values of all peak points form a matrix whose rows correspond to peak point numbers and whose columns correspond to direction numbers, denoted $O = [\Delta h_{i,k}]$, where $i$ denotes the peak point and $k$ the direction number; after all calculations are completed, the elevation offset data is generated.
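The 8-direction offset calculation can be sketched as follows, assuming grid-center elevations are held in a dictionary keyed by (column, row); treating a missing neighbor as zero offset is an added assumption of this sketch:

```python
# Principal azimuth order from the text: E, SE, S, SW, W, NW, N, NE
DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, -1),
              (-1, 0), (-1, 1), (0, 1), (1, 1)]
STRONG_PROTRUSION = 2.5  # meters; lower tolerance limit from the text

def offset_row(grid_elev, cell):
    """One row of the offset matrix O: dh_k = h0 - h_k over 8 directions.
    grid_elev maps (col, row) -> center elevation of that grid cell."""
    h0 = grid_elev[cell]
    row = []
    for dx, dy in DIRECTIONS:
        h_k = grid_elev.get((cell[0] + dx, cell[1] + dy), h0)  # missing -> 0 offset
        row.append(h0 - h_k)
    return row

def strong_directions(row):
    """Indices of directions whose offset exceeds the 2.5 m threshold."""
    return [k for k, dh in enumerate(row) if dh > STRONG_PROTRUSION]
```

With the worked values (peak 51.6 m, eastern neighbor 48.5 m), only direction 0 (east) is flagged as a strong protrusion direction.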
S103, detecting an elevation abnormal position in the area and identifying an obstacle, including a tower crane and an electric tower, by utilizing the elevation offset according to the elevation offset data, and generating obstacle identification information by extracting the size and position information of a target obstacle;
According to the elevation offset data, the direction offset matrices of all peak points are traversed in sequence. Points whose elevation difference exceeds 2.5 meters in any 3 consecutive directions are extracted and marked as candidate points of elevation-abnormal regions. The number of other peak points within the 5×5 grid range around each candidate point is then counted; if it exceeds 5, the region is judged a concentrated elevation-fluctuation region and enters the obstacle judging process. The distribution density of elevation peak points in the region and the local maximum elevation value are further analyzed: if the local maximum elevation exceeds the average elevation of all surrounding adjacent grids by more than 3 standard deviations, and the number of peripheral discontinuous fluctuation directions is smaller than 3, the region is identified as a stable-structure obstacle candidate region. A structure identification operation is executed in this region: the vertical span value and horizontal projection area of all abnormal points are extracted, and regions whose span exceeds 10 meters and whose area lies between 4 and 36 square meters are matched in structural form against the standard outer-contour sizes of target structures such as tower cranes and electric towers. The structure type is evaluated according to the degree of size coincidence and boundary fitting, the corresponding spatial coordinate information is recorded, and a unique identification is established for each obstacle, completing the generation of the obstacle identification information with obstacle type, position and size.
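The consecutive-direction test in this step can be sketched as follows; treating the 8 directions as a circular sequence, and the function name itself, are assumptions of this sketch:

```python
def is_abnormal_candidate(offsets, threshold=2.5, run=3):
    """A peak is an elevation-abnormality candidate if its offset exceeds
    the threshold in `run` consecutive directions; the 8 directions are
    treated as circular (NE wraps around to E)."""
    flags = [dh > threshold for dh in offsets]
    n = len(flags)
    for start in range(n):
        if all(flags[(start + k) % n] for k in range(run)):
            return True
    return False
```

Three large offsets spread across non-adjacent directions do not qualify; three adjacent ones do, even across the NE-to-E wrap.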
Invoking obstacle identification information, extracting the spatial position and the geometric shape of the obstacle, calculating the bypass cost score by analyzing the path length, the flight height adjustment quantity and the path steering angle increased by bypass flight in each unmanned aerial vehicle path, and acquiring the unmanned aerial vehicle measurement path specifically comprises the following steps:
S201, calling obstacle identification information, extracting space position coordinates, vertical dimensions and boundary contour points of each obstacle, and identifying the position where space overlap exists in each unmanned aerial vehicle path according to the space envelope range of each obstacle to obtain a path intersection segment data set;
First, the spatial position coordinate information of each obstacle is extracted, including the longitude and latitude coordinates of its center point and the corresponding elevation value, together with the vertical dimension data of the obstacle in three-dimensional space, recording the continuous elevation difference from base to top. The boundary contour points of each obstacle are sampled in a distributed manner, and their projected boundaries under the horizontal coordinate system form a closed boundary set. On this basis, a three-dimensional envelope body is constructed according to the spatial envelope range of each obstacle, and its occupied spatial volume is calibrated. The planned unmanned aerial vehicle measurement path data set is then acquired, the waypoint coordinate sequence of each path segment is extracted, and all consecutive points within the path segments are connected linearly to construct a set of flight path line segments. A spatial overlap judgment is performed between the envelope bodies and the path line segment set: if a path line segment intersects an obstacle envelope surface in any direction within the region more than 1 meter below the obstacle top, it is regarded as a spatial intersection segment. For each segment meeting the overlap condition, the path number, the start and end coordinates of the intersection segment, and the corresponding obstacle identification are recorded in sequence, and the number of intersection segments per path is counted, forming the path intersection segment data set.
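The envelope-overlap test can be sketched as a sampled point-in-box check, assuming an axis-aligned envelope body; an exact segment/box clipping test (e.g. the slab method) would replace the sampling in practice, and all names here are illustrative:

```python
def segment_hits_envelope(p0, p1, box, clearance=1.0, samples=100):
    """Check whether the flight segment p0 -> p1 passes through the obstacle
    envelope below (top - clearance). `box` is ((xmin, xmax), (ymin, ymax),
    (zmin, zmax)); sampling is a sketch, not an exact intersection test."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = box
    ztop = zmax - clearance                 # region more than 1 m below the top
    for s in range(samples + 1):
        t = s / samples
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        z = p0[2] + t * (p1[2] - p0[2])
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= ztop:
            return True
    return False
```

A segment crossing the envelope at mid-height is flagged; one skimming within 1 meter of the top is not, matching the clearance rule in the text.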
S202, analyzing path lengths, flight height adjustment amounts and path steering angles required by the detour flight of a plurality of overlapped positions according to the path intersection segment data set, calculating detour cost scores of each path, and obtaining a path detour cost score set;
the specific formula for calculating the bypass cost score of each path is as follows:
$C_p = \sum_{j=1}^{n_p} \left( \alpha \cdot L_{p,j} + \beta \cdot H_{p,j} + \gamma \cdot \left| \theta_{p,j} - \bar{\theta}_j \right| \right)$;
Calculating a comprehensive score of the path bypassing cost, and acquiring a path bypassing cost score set;
Wherein, $C_p$ is the path-bypass cost composite score value of the path numbered $p$; $p$ is the unique identification number of the path; $j$ is the index number of the intersecting segments in the path; $n_p$ is the total number of intersecting segments in path $p$; $L_{p,j}$ is the bypass path length value of the $j$-th intersecting segment in path $p$; $H_{p,j}$ is the required flying height adjustment of the $j$-th intersecting segment in path $p$; $\theta_{p,j}$ is the actual steering angle value on the $j$-th intersecting segment in path $p$; $\bar{\theta}_j$ is the average value of the steering angles of all paths at the $j$-th intersection; $\alpha$ is the weight coefficient of the path length factor; $\beta$ is the weight coefficient of the flying height adjustment factor; $\gamma$ is the weight coefficient of the steering angle deviation factor.
Formula details and calculation derivation process:
The formula calculates the composite bypass-cost score of the path numbered $p$ over all its intersecting segments, serving as the evaluation and ranking basis for optimal scheduling of unmanned aerial vehicle paths.
Parameter meanings and set values:
$p$: the current path, numbered path 05;
$n_p$: the number of intersecting segments detected in path 05; the spatial registration analysis, determined by the overlap relation between the path and the obstacle spatial envelope bodies, gives $n_p = 3$, with the segment index $j$ numbered from 1 to 3;
$L_{05,j}$: the bypass path length of the $j$-th segment in path 05, set as $L_{05,1} = 28.3$ meters, $L_{05,2} = 32.7$ meters, $L_{05,3} = 25.1$ meters;
$H_{05,j}$: the flight height adjustment of path 05 at the $j$-th segment, set as $H_{05,1} = 6.2$ meters, $H_{05,2} = 4.7$ meters, $H_{05,3} = 5.9$ meters;
$\theta_{05,j}$: the steering angle of path 05 at the $j$-th segment, set as $\theta_{05,1} = 34°$, $\theta_{05,2} = 48°$, $\theta_{05,3} = 27°$;
$\bar{\theta}_j$: the average steering angle of all paths at the $j$-th intersection, set per intersection;
$\alpha$, $\beta$, $\gamma$: the weight coefficients of the length, flight height and angle deviation factors, set accordingly;
Substituting the parameters into the formula, the cost score of each segment is $\alpha L_{05,j} + \beta H_{05,j} + \gamma \left| \theta_{05,j} - \bar{\theta}_j \right|$ for $j = 1, 2, 3$; summing the three segment scores gives the composite score $C_{05} = 50.29$.
The result 50.29 is the bypass cost score of path 05 after comprehensively considering the detour length, the flight height adjustment and the angle deviation over all intersecting segments; this value is the key index used for ranking in the subsequent optimal path scheduling.
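The summation can be computed directly as below. The default weights and the per-intersection average angles used in the demonstration are hypothetical placeholders, not the values set in the example, so the demonstration total differs from 50.29:

```python
def bypass_cost(lengths, height_adjs, angles, avg_angles,
                alpha=0.4, beta=0.3, gamma=0.3):
    """Composite bypass-cost score:
    C_p = sum_j (alpha*L_j + beta*H_j + gamma*|theta_j - thetabar_j|).
    Default weights are hypothetical, not the patent's settings."""
    return sum(alpha * L + beta * H + gamma * abs(th - thb)
               for L, H, th, thb in zip(lengths, height_adjs, angles, avg_angles))
```

Called with the path-05 lengths, height adjustments and steering angles from the text and hypothetical averages of 30°, 40° and 30°, the three segment costs sum to 43.98 under the placeholder weights.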
S203, according to the path bypass-cost score set, calculating the priorities of the plurality of unmanned aerial vehicle paths from the bypass-cost scores and the predicted flight durations, and acquiring the unmanned aerial vehicle measurement path;
Extracting the cost scoring results of all paths from the path bypass-cost score set, and calculating the flight duration of each path from its flight distance and the set flight speed; the flight duration equals the total path length divided by the flight speed, with units unified to seconds. The flight speed of the unmanned aerial vehicle is set to 8 meters per second, so a path 620 meters long has a flight duration of 620 / 8 = 77.5 seconds. The cost score and flight duration of each path are then normalized using the interval mapping formula x' = (x − x_min) / (x_max − x_min), where x is the original value and x_min, x_max are respectively the minimum and maximum over all paths; the normalized bypass cost and flight duration are denoted s' and t'. Finally the two are weighted and combined to obtain the path priority index, with weights of 0.6 and 0.4 respectively: P = 0.6·s' + 0.4·t'. All paths are arranged in ascending order of priority value, the first several path numbers are extracted, their waypoint index sequences are recorded and combined to form the final unmanned aerial vehicle measurement path, thereby obtaining the unmanned aerial vehicle measurement path.
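The normalization and weighted combination in S203 can be sketched as below. The score and duration ranges passed to `priority` are hypothetical placeholders standing in for the actual minima and maxima over all candidate paths.

```python
def minmax(x, lo, hi):
    """Interval mapping of x into [0, 1]; lo/hi are the min/max over all paths."""
    return (x - lo) / (hi - lo)

def priority(cost, duration, cost_range, dur_range, w_cost=0.6, w_dur=0.4):
    """Weighted combination of normalized bypass cost and flight duration."""
    return w_cost * minmax(cost, *cost_range) + w_dur * minmax(duration, *dur_range)

# Example from the text: a 620 m path flown at 8 m/s takes 77.5 s.
duration = 620 / 8.0
# Hypothetical ranges over all candidate paths:
p = priority(cost=50.29, duration=duration,
             cost_range=(40.0, 60.0), dur_range=(60.0, 100.0))
```

Paths are then sorted by ascending `p`, and the waypoint sequences of the top-ranked paths are merged into the final measurement path.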
Acquiring an unmanned aerial vehicle measuring path, dynamically adjusting flight height control parameters of the unmanned aerial vehicle according to acquisition precision requirements and combining with real-time image definition, acquiring image information of a construction area and laser radar point cloud data in real time, and generating an area imaging data set specifically comprises the following steps:
S301, acquiring the unmanned aerial vehicle measurement path, calculating the focal-length parameters required by the image-acquisition device according to the regional image-acquisition precision requirement, and generating focal-length/flying-height comparison data by combining the flying-height reference values required for the corresponding focal lengths;
Acquiring the unmanned aerial vehicle measurement paths; first, according to the regional image-acquisition precision requirement, the flight-segment coordinate sequence and the corresponding waypoint numbers in each unmanned aerial vehicle path are extracted, and for each flight segment the horizontal distance between waypoints is calculated. Combining the sensor size of the imaging device, the pixel size p and the vertical distance H between the flight platform and the ground, the focal-length parameter f is back-calculated from the standard spatial-resolution formula GSD = H·p / f, i.e. f = H·p / GSD, where GSD is the ground resolution in centimeters per pixel and f is the lens focal length in millimeters. The target resolution is set to 3 cm/pixel, the pixel size to 4.2 μm and the initial flying height to 90 meters; substitution gives a focal length f = 90 m × 4.2 μm / 3 cm ≈ 12.6 mm. To ensure imaging sharpness, multiple flying-height values at different heights are taken and substituted into the formula respectively to obtain the corresponding required focal-length values, from which a height–focal-length mapping table is constructed, each group of flying height and its corresponding focal length recorded in the table being organized as a data pair (H, f). For a typical operation area, the five common heights of 70, 80, 90, 100 and 110 meters yield focal lengths of approximately 9.8 mm, 11.2 mm, 12.6 mm, 14.0 mm and 15.4 mm respectively, and each pair is recorded as a comparison entry to obtain the focal-length/flying-height comparison data.
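The height-to-focal-length table can be generated directly from the resolution formula. A minimal sketch, assuming the stated 3 cm/pixel target and 4.2 μm pixel pitch:

```python
def focal_length_mm(height_m, pixel_um, gsd_cm):
    """f = H * p / GSD, with units converted so the result is in millimeters."""
    return (height_m * 1000.0) * (pixel_um / 1000.0) / (gsd_cm * 10.0)

# Height-to-focal-length pairs (H in meters -> f in mm) for the five common heights
table = {h: round(focal_length_mm(h, 4.2, 3.0), 2) for h in (70, 80, 90, 100, 110)}
print(table)
```

Each `(H, f)` pair in `table` corresponds to one comparison entry of the focal-length/flying-height comparison data.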
S302, according to the focal-length/flying-height comparison data, evaluating the definition of each image during the measuring process by extracting in real time its contrast value, edge-sharpening degree value and target-edge recognizability rate, so as to obtain the imaging definition of the image;
The specific formula for evaluating the definition of an image is:
C = (G_max − G_min) / (G_max + G_min + ε) + (1/N)·Σ_{k=1}^{N} g_k / ḡ + (1/M)·Σ_{j=1}^{M} r_j / r̄
which calculates the imaging definition of the image.
Wherein C represents the imaging definition of the image, G_max represents the maximum gray value in the image, G_min represents the minimum gray value in the image, ε is a constant for preventing division by zero, N is the number of edge pixels, g_k is the gradient magnitude of the k-th edge pixel, ḡ is the average gradient magnitude of all edge pixels, M is the number of regions in the image, r_j represents the target-boundary recognition rate of the j-th region, r̄ is the average target-boundary recognition rate over all regions, k denotes the index of an edge pixel in the image, and j denotes the index of a region in the image.
Formula details and calculation derivation:
The formula calculates the imaging definition C of the image, and the result is used to evaluate image quality. The formula comprises three parts: a gray-contrast part, an edge-sharpening part and an edge-recognizability part.
Parameter meaning and setting value:
G_max, the maximum gray value of the image, is set to 200, reflecting the maximum image brightness;
G_min, the minimum gray value of the image, is set to 50, reflecting the minimum image brightness;
ε, the constant for preventing division by zero in the calculation, is set to 0.01;
N, the number of edge pixels, is set to 5000; g_k is the gradient magnitude of the k-th edge pixel, representing the intensity of the edge portion of the image; ḡ, the average gradient magnitude of all edge pixels, is set to 110, representing the average intensity of all edge pixels;
M, the number of regions in the image, is set to 10, meaning the image is divided into 10 regions; r_j is the target-boundary recognition rate of the j-th region, representing the proportion of that region's boundary that is correctly recognized; r̄, the average boundary recognition rate over all regions, is set to 0.80.
Substituting the parameters into the formula to calculate the definition of the image:
Gray-contrast part: (200 − 50) / (200 + 50 + 0.01) = 150 / 250.01 ≈ 0.60;
Edge-sharpening part: since ḡ is defined as the average of the g_k over all N edge pixels, (1/N)·Σ g_k / ḡ = 1.00; likewise, since r̄ is the average of the r_j over all M regions, the edge-recognizability part (1/M)·Σ r_j / r̄ = 1.00;
C ≈ 0.60 + 1.00 + 1.00 = 2.60.
The result C ≈ 2.60 reflects the comprehensive definition of the image: the image has high contrast, a good edge-sharpening effect and a high target-boundary recognition rate, indicating good image quality suitable for further analysis and modeling.
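The three-part definition score can be sketched as below. The sample gradient and recognition-rate lists are illustrative stand-ins whose means equal the stated averages (110 and 0.80), so the second and third terms each evaluate to 1.0 by construction.

```python
def sharpness(g_max, g_min, eps, gradients, avg_grad, rates, avg_rate):
    """Definition score C = gray contrast + edge sharpening + boundary recognizability."""
    contrast = (g_max - g_min) / (g_max + g_min + eps)
    edge = sum(g / avg_grad for g in gradients) / len(gradients)
    boundary = sum(r / avg_rate for r in rates) / len(rates)
    return contrast + edge + boundary

# Illustrative samples with means of 110 and 0.80 respectively
C = sharpness(200, 50, 0.01, gradients=[100, 120], avg_grad=110,
              rates=[0.75, 0.85], avg_rate=0.80)
print(round(C, 2))  # 2.6
```

In practice `gradients` would hold the per-edge-pixel gradient magnitudes and `rates` the per-region boundary recognition rates extracted from each captured frame.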
S303, dynamically updating flight height parameters according to the imaging definition of the images, recording image acquisition time points and image sequence numbers, and acquiring an area imaging data set by combining laser radar point cloud data;
According to the imaging definition data of the images, the evaluation result of each image is dynamically compared; the flying heights corresponding to images that fail the definition evaluation are recorded into a regulation queue, and height correction is applied to the flight segments of those images. The correction value is set according to the missing definition-index item, with the height-adjustment proportion set as a 5% step between adjacent height groups in the focal-length comparison table; for example, if the current image was captured at 90 meters and the recognition-rate index falls below the set threshold, the flying height is corrected to 90 × (1 − 5%) = 85.5 meters. The corresponding flight time points and image number information are marked, and the frame numbers of the lidar point-cloud data at the corresponding times are recorded. Image acquisition and the laser point cloud are synchronized using the timestamp as the primary key, and region information records comprising image frame number, shooting time, flying height and point-cloud frame number are jointly constructed. The synchronized image and point-cloud sequences of all operation sections are summarized to generate a full-coverage time-series data set of the region, thereby acquiring the regional imaging data set.
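The height correction and the timestamp-keyed synchronization record can be sketched as follows. The direction of the correction (downward, to improve resolution) and the record field names are assumptions for illustration.

```python
def corrected_height(current_h_m, step=0.05):
    """One downward 5% correction step, applied when an image fails the definition check."""
    return current_h_m * (1 - step)

# Hypothetical synchronized record keyed by timestamp (field names are illustrative)
record = {
    "image_frame": 152,
    "capture_time": "10:32:05.120",
    "flight_height_m": corrected_height(90.0),  # 90 m corrected by one 5% step
    "pointcloud_frame": 152,                    # matched via the timestamp primary key
}
```

Collecting one such record per frame over all operation sections yields the full regional imaging data set.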
According to the regional imaging data set, analyzing the image and the laser radar point cloud data by utilizing the flying height value, reconstructing a construction engineering model of a construction region, comparing the construction engineering model with a building information model of a target engineering, calculating the completion degree of the engineering in real time, identifying the engineering progress, and generating construction progress information specifically comprises the following steps:
S401, extracting a flight height value and a time mark corresponding to each frame in image information according to a regional imaging data set, and carrying out space coordinate registration on image data and point cloud data by combining space coordinate information of each measuring point in laser radar point cloud data to generate a space data alignment result;
According to the regional imaging data set, first the flying-height data corresponding to each frame of image is extracted and the acquisition time mark of each frame is recorded; the pixel coordinates of each frame are then converted into ground coordinates. The conversion invokes the flying-height value H and the focal length f in the interior-orientation relations X = x·H/f and Y = y·H/f, where x and y are the image-plane coordinates of a point (pixel coordinates multiplied by the pixel size). In a practical case, a point with pixel coordinates (1200, 1500) converts, under the recorded focal length and flying height, to the ground coordinates (80 m, 100 m). The spatial coordinates of each measuring point are extracted from the lidar point-cloud data, including the planimetric and elevation coordinates of each point, recorded as (X_p, Y_p, Z_p). The ground coordinates obtained by converting the image coordinates are then aligned in spatial position with the laser point-cloud measuring points: the converted point coordinates are compared with the point-cloud measuring-point coordinates under the minimum-distance matching principle, i.e. the Euclidean distance d = sqrt((X − X_p)² + (Y − Y_p)² + (Z − Z_p)²) is calculated, and if the distance is smaller than the set spatial-registration threshold of 2 meters, the pair is regarded as a successful registration point. The pixel coordinates of all image frames and the point-cloud points are traversed in turn for registration matching; in the practical example, matching against the point-cloud point (81 m, 99 m, 90.5 m) yields a distance smaller than the 2-meter threshold. After the registration of all images and point clouds is completed, the image serial number of each successfully aligned group and the corresponding point-cloud measuring-point index are recorded, obtaining the spatial data alignment result.
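The minimum-distance matching in S401 can be sketched as a nearest-neighbor search with a 2 m acceptance threshold. The elevation value (90 m) attached to the image-derived ground point below is an assumption for illustration; the source gives only its planimetric coordinates.

```python
import math

def register(img_points, cloud_points, threshold=2.0):
    """Match each image-derived ground point to its nearest lidar point,
    accepting the pair only if the Euclidean distance is below the threshold (m)."""
    matches = []
    for i, p in enumerate(img_points):
        best_j, best_d = None, float("inf")
        for j, q in enumerate(cloud_points):
            d = math.dist(p, q)       # 3D Euclidean distance
            if d < best_d:
                best_j, best_d = j, d
        if best_d < threshold:
            matches.append((i, best_j, best_d))
    return matches

# Image-derived point (80, 100, assumed 90 m elevation) vs. cloud point (81, 99, 90.5)
matches = register([(80.0, 100.0, 90.0)], [(81.0, 99.0, 90.5)])
```

A production version would replace the linear scan with a k-d tree, but the acceptance logic is the same.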
S402, reconstructing a building model in a target construction engineering area based on a spatial data alignment result, and extracting geometric dimension information of the model to obtain a construction engineering model;
Based on the spatial data alignment result, the successfully registered point-cloud data point sets are unified into the same spatial coordinate system and all point-cloud data are called; the surface coordinate set of the area where each building target is located is calculated, and the shape of the outer surface of the target building is three-dimensionally reconstructed from the point-cloud data. The geometric dimensions of the building components, comprising length, width and height, are acquired by a spatial-coordinate solving method through extracting the three-dimensional boundary points of each part of the building. Specifically, a search-matching operation is performed on the farthest point pairs of the boundary point set of each component along the three axis directions, and during the search the difference between the maximum and minimum X, Y and Z coordinates of the component boundary points is computed; for example, if the maximum and minimum X coordinates of the point cloud of a certain construction component are 115 meters and 100 meters, the length dimension of the component in the X direction is 115 − 100 = 15 meters. The Y-direction and Z-direction dimensions are determined in the same manner, giving the three-dimensional geometric dimensions of the component as length 15 meters, width 12 meters and height 8 meters. The dimensions of all components in the area are measured in turn and recorded together with their spatial positions, the geometric-dimension parameters of the whole construction area are summarized completely, and the construction engineering model is obtained.
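The axis-aligned dimension extraction described above reduces to a max-minus-min per axis over the boundary point set. The two sample points below are hypothetical, chosen only to reproduce the 15 × 12 × 8 m example:

```python
def component_dimensions(points):
    """Axis-aligned extents (length, width, height) of a boundary point set."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Hypothetical boundary points spanning the worked example's extents
dims = component_dimensions([(100.0, 0.0, 0.0), (115.0, 12.0, 8.0)])
print(dims)  # (15.0, 12.0, 8.0)
```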
S403, according to the construction engineering model, calling a building information model of the target engineering and comparing the building information model with an actual construction model, calculating the engineering completion degree in a plurality of grids, identifying the engineering progress and calculating the progress deviation difference value, and obtaining the construction progress information;
According to the construction engineering model, the spatial position and corresponding geometric-dimension data of each component in the model are extracted, and the designed building information model of the target engineering is called; the actual construction model is compared with the design model according to the spatial coordinates, with the grid-cell size set to 5 m × 5 m. In each grid cell the actual constructed volume V_a of the construction model is compared with the target volume V_t of the design model, and the engineering completion degree is calculated as P = V_a / V_t × 100%. For example, if the design-model volume of a certain grid cell is 200 cubic meters and the measured construction-model volume is 160 cubic meters, the completion degree of the grid cell is 160 / 200 × 100% = 80%. If the completion degree is smaller than the preset completion reference value of 90%, the cell is judged to be a slow construction-progress unit, and all slow grid cells are separately recorded and marked. The difference between the actual and designed completion in every grid cell, i.e. the progress deviation, is calculated as D = V_a − V_t; for the above grid cell the difference is 160 − 200 = −40 cubic meters, the negative deviation value indicating that the construction progress lags by 40 cubic meters. The completion degrees and deviation values of all grid cells are summarized to establish a complete construction-area progress information set, thereby acquiring the construction progress information.
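The per-cell completion and deviation computation can be sketched directly from the formulas above:

```python
def grid_progress(actual_m3, target_m3, threshold=0.90):
    """Completion ratio and volume deviation for one 5 m x 5 m grid cell."""
    completion = actual_m3 / target_m3   # P = V_a / V_t (e.g. 160 / 200 = 0.80)
    deviation = actual_m3 - target_m3    # D = V_a - V_t; negative => cell lags behind
    behind = completion < threshold      # slow-progress flag at the 90% reference
    return completion, deviation, behind

completion, deviation, behind = grid_progress(160.0, 200.0)
print(completion, deviation, behind)  # 0.8 -40.0 True
```

Running this over every grid cell and collecting the flagged cells yields the construction-area progress information set.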
Invoking construction progress information, extracting geometric center point coordinates, space extending directions and boundary sizes of a plurality of components in a construction engineering model, combining design position parameters and direction vectors of corresponding components in a building information model, and calculating the space position deviation degree of each component by comparing the space position difference among the geometric center points, the deviation angle among the direction vectors and the coordinate distribution difference among the boundary sizes, wherein the step of obtaining the construction engineering measurement record comprises the following specific steps:
S501, calling construction progress information, extracting three-dimensional boundary data of components in each intra-grid construction engineering model, and generating a component space attribute data set by extracting geometric center point coordinates, space extending direction vectors and boundary dimension values of each component;
Calling the construction progress information, the component units whose construction is completed in each spatial grid are read in turn, and the point-cloud boundary point sets formed in the three-dimensional model are extracted. From each component boundary point set, the farthest point pairs along the X, Y and Z directions are extracted respectively, and the coordinate differences determine the boundary dimension values of the component in the three axis directions; for example, if the maximum value of a certain component in the X-axis direction is 102.4 meters and the minimum value is 98.6 meters, its X-axis dimension is 102.4 − 98.6 = 3.8 meters. The geometric center-point coordinates are obtained by taking the arithmetic mean of all boundary-point coordinates; for example, if the boundary-point X coordinates of a certain component are 98.6, 99.1, 100.4 and 102.4 in sequence, the center-point X coordinate is (98.6 + 99.1 + 100.4 + 102.4) / 4 = 100.125. Further, according to the spatial connection relation of the boundary points, the direction of the maximum projection vector is obtained by principal component analysis and taken as the spatial extension direction vector; an extension vector of (0.87, 0.49, 0) indicates that the component extends obliquely within the XY plane, predominantly along the X direction with a component along Y. After the geometric center-point coordinates, spatial extension direction vectors and boundary dimensions of all components are calculated, they are structurally summarized by component number to form a structured data set with the component number as the primary key, generating the component spatial attribute data set.
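The three per-component attributes (centroid, extension direction via principal component analysis, axis extents) can be sketched with NumPy. The degenerate one-axis point set below is a hypothetical example reusing the X coordinates from the text:

```python
import numpy as np

def component_attributes(boundary_pts):
    """Centroid, principal extension direction, and axis-aligned extents of a component."""
    P = np.asarray(boundary_pts, dtype=float)
    centroid = P.mean(axis=0)
    extents = P.max(axis=0) - P.min(axis=0)
    # Principal direction = covariance eigenvector with the largest eigenvalue
    cov = np.cov((P - centroid).T)
    w, v = np.linalg.eigh(cov)
    direction = v[:, np.argmax(w)]
    return centroid, direction, extents

pts = [(98.6, 0, 0), (99.1, 0, 0), (100.4, 0, 0), (102.4, 0, 0)]
centroid, direction, extents = component_attributes(pts)
```

Here `centroid[0]` reproduces the 100.125 m center-point X coordinate and `extents[0]` the 3.8 m X-axis dimension from the worked example.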
S502, according to a component space attribute data set, extracting design coordinates, design direction vectors and standard boundary dimension data of corresponding components in a building information model, and comparing to obtain component space difference data;
According to the component spatial attribute data set, the design-component parameters corresponding to each component are read one by one from the building information model, the design coordinates of the identically numbered component are compared with the actual center-point coordinates, and the three-dimensional spatial difference value is calculated. The position difference formula is d = sqrt((X_a − X_d)² + (Y_a − Y_d)² + (Z_a − Z_d)²), where (X_a, Y_a, Z_a) are the component center-point coordinates in the actual model and (X_d, Y_d, Z_d) are the component center-point coordinates in the design model; if the actual component center point is (100.1, 36.7, 4.8) and the design center point is (100.0, 36.6, 4.9), the position difference is sqrt(0.1² + 0.1² + 0.1²) ≈ 0.17 meters. The spatial direction difference is judged from the included angle between the actual and design direction vectors; if the included angle is larger than 15 degrees, it is regarded as a direction deviation. In addition, the length, width and height differences of the boundary dimensions are compared, and if a single-axis dimension difference exceeds 10% of the design dimension, the component is marked as dimensionally abnormal. After the comparison of positions, directions and boundary dimensions of all components is completed, the difference record of each component in the three dimensions is output, obtaining the component spatial difference data.
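The position and direction comparisons can be sketched as follows; the included-angle formula (via the normalized dot product) is a standard reconstruction of the comparison the text describes.

```python
import math

def position_deviation(actual, design):
    """Euclidean distance between actual and design center points (m)."""
    return math.dist(actual, design)

def direction_angle_deg(v_actual, v_design):
    """Included angle (degrees) between actual and design direction vectors."""
    dot = sum(a * b for a, b in zip(v_actual, v_design))
    na = math.sqrt(sum(a * a for a in v_actual))
    nb = math.sqrt(sum(b * b for b in v_design))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

d = position_deviation((100.1, 36.7, 4.8), (100.0, 36.6, 4.9))
print(round(d, 2))  # 0.17
```

A component is flagged for direction deviation when `direction_angle_deg` exceeds 15° and as dimensionally abnormal when any axis dimension differs from the design by more than 10%.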
S503, analyzing the space position deviation degree of each component according to the component space difference data, calculating the deviation grade and mapping the deviation grade into a construction engineering model to obtain a construction engineering measurement record;
According to the component spatial difference data, the difference-value fields of each component in the three dimensions of position, direction and boundary dimension are extracted, and offset-level division thresholds are set: the position offset is first level for 0–0.1 meters, second level for 0.1–0.3 meters and third level above 0.3 meters; the direction offset is first level for 0–5 degrees, second level for 5–15 degrees and third level above 15 degrees; the dimension offset, as a relative error, is first level for 0–5%, second level for 5%–10% and third level above 10%. A level label is assigned according to the interval in which each difference value falls, the level values of each component on the three indexes are synthesized, and the maximum offset level is adopted as the overall offset level of the component; for example, if the position offset of a certain component is 0.28 meters (second level), its direction offset is 17 degrees (third level) and its dimension offset is 4% (first level), the overall offset level of the component is the third level. The corresponding spatial coordinate positions and component numbers are marked in the construction model, and an offset-level annotation layer covering the engineering measurements of the whole area is generated, thereby obtaining the construction engineering measurement record.
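The threshold-based grading and the maximum-level synthesis can be sketched directly from the intervals above:

```python
def offset_level(pos_m, dir_deg, size_pct):
    """Per-index offset levels (1-3) and the overall level (the maximum of the three)."""
    def level(x, t1, t2):
        return 1 if x <= t1 else (2 if x <= t2 else 3)
    levels = (level(pos_m, 0.1, 0.3),   # position thresholds: 0.1 m / 0.3 m
              level(dir_deg, 5, 15),    # direction thresholds: 5 deg / 15 deg
              level(size_pct, 5, 10))   # relative-size thresholds: 5% / 10%
    return levels, max(levels)

levels, overall = offset_level(0.28, 17, 4)
print(levels, overall)  # (2, 3, 1) 3
```

Each component's `overall` level, together with its number and spatial coordinates, is what gets written into the offset-level annotation layer.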
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions in accordance with embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, as may be understood from the context.
In the present invention, "at least one" means one or more, and "a plurality" means two or more. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b and c" may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b and c may each be single or plural.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.