Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction, which comprises the following steps of:
1) Acquiring oblique photographic images and/or multi-view images of a target structure by using an unmanned aerial vehicle, and reconstructing the acquired images into a low-precision image point cloud;
2) Acquiring a three-dimensional grid model of a target structure, and dispersing the three-dimensional grid model into a point cloud of the target structure;
3) Matching the target structure point cloud with the low-precision image point cloud and recording the coordinate transformation matrix M of the matching process; taking the matched target structure point cloud as the target point cloud, and taking the low-precision point cloud points whose distance from the target point cloud is greater than d_min as the obstacle point cloud (a minimal sketch of this step follows this list);
4) Respectively constructing a target grid and an obstacle grid based on the target point cloud and the obstacle point cloud;
5) Acquiring a flight space and an image acquisition candidate point set;
6) Establishing a camera main view model at each image acquisition candidate point;
7) Creating a ray-casting (light projection) scene from the target grids, the obstacle grids and the camera models, and calculating the visibility information of the target grids under each camera model, the visibility information comprising the index of each visible target grid and the distance between each visible target grid and the unmanned aerial vehicle image acquisition candidate point;
8) Based on the visibility information of the target grids, dividing the visible target grids under each camera model into class I grids and class II grids, and screening the unmanned aerial vehicle image acquisition candidate points whose class I grid number is greater than N as the effective viewpoint set;
9) Acquiring a working viewpoint from the effective viewpoint set;
10) Generating a flight path of the unmanned aerial vehicle in the flight space based on the working viewpoint coordinates;
11) Based on the unmanned aerial vehicle flight path and the camera external parameters of the working viewpoints, calculating the flight parameters of the unmanned aerial vehicle at each working viewpoint, and generating a flight file.
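The matching and separation of step 3) can be illustrated compactly. The sketch below is a minimal, non-normative Python rendering (the patent prescribes no library): it assumes the transformation M has already been obtained by a registration step such as ICP, applies it to the structure point cloud, and separates obstacle points by the d_min threshold; all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def split_target_and_obstacles(image_cloud, structure_cloud, M, d_min=0.10):
    """image_cloud: (N,3) low-precision cloud; structure_cloud: (K,3); M: 4x4."""
    # Apply the recorded coordinate transformation M to the structure cloud.
    homog = np.hstack([structure_cloud, np.ones((len(structure_cloud), 1))])
    target_cloud = (M @ homog.T).T[:, :3]            # matched target point cloud
    # Image points farther than d_min from the target cloud are obstacles.
    dist, _ = cKDTree(target_cloud).query(image_cloud)
    return target_cloud, image_cloud[dist > d_min]   # target, obstacle clouds
```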
Further, in step 4), the target point cloud is encapsulated into a target grid, or the three-dimensional grid of the target structure transformed by the coordinate transformation matrix M is encapsulated into the target grid.
Further, in step 5), the step of acquiring the flying space and the image acquisition candidate point set includes:
5.1) Establishing a semi-directional bounding box of the target grid and expanding the bounding box sideways and upwards, wherein the expansion size is d = (P/λ)·p·f_eq/S_35 and the side length of the expanded bounding box is rounded up to an integral multiple of the hovering precision of the unmanned aerial vehicle; P is the desired point cloud point position tolerance, λ ∈ [5,10] is the conversion coefficient between pixel precision and point cloud tolerance, p is the pixel precision, f_eq is the 35 mm equivalent focal length of the camera, and S_35 is the 35 mm film frame size;
5.2) Dividing the expanded bounding box into voxels according to the second rule, with k times the hovering precision of the unmanned aerial vehicle as the voxel edge length, and taking the center point coordinates of the voxels as point set A, wherein k is a natural number not less than 1;
5.3) Partitioning point set A: a point is assigned to point set B if the voxel it represents contains target point cloud points, to point set C if the voxel contains obstacle point cloud points, and to point set D if the voxel contains neither target point cloud nor obstacle point cloud points;
5.4) According to the third rule, removing from point set D the points whose nearest-point distance to point set B or point set C is smaller than the safety distance d_safe, to obtain point set E;
5.5) Removing from point set E the points directly below point set B and point set C, to obtain point set F;
5.6) Selecting the largest point cluster in point set F as point set G; the voxel space represented by point set G is the flight space of the unmanned aerial vehicle;
5.7) Removing from point set G the points whose distance to the nearest point of the target point cloud is smaller than d, to obtain point set H;
5.8) Taking each point in point set H as a starting point and pointing to its nearest point in the target point cloud, constructing a line of sight and calculating the sight elevation angle; eliminating from point set H the points whose sight elevation angle exceeds the maximum elevation angle of the unmanned aerial vehicle gimbal, to obtain the unmanned aerial vehicle image acquisition candidate point set.
Further, in step 5.1), the height direction of the target grid semi-directional bounding box is aligned with the positive direction of the Z axis, and the lateral surfaces of the bounding box are all perpendicular to the XY plane.
Further, the safety distance d_safe in step 5.4) is not less than the sum of the hovering precision of the unmanned aerial vehicle, the estimated precision of the low-precision point cloud, and the minimum obstacle avoidance distance of the unmanned aerial vehicle. A minimal sketch of steps 5.2)-5.4) follows.
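The sketch assumes an axis-aligned expanded bounding box and uses the Chebyshev distance to test whether a voxel contains a cloud point; for brevity the box is expanded on all sides, whereas the method expands it sideways and upwards only. All names and defaults are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_voxel_centers(target_pts, obstacle_pts, d, hover_prec, k=2, d_safe=0.9):
    lo = target_pts.min(axis=0) - d        # simplified: expand on all sides
    hi = target_pts.max(axis=0) + d
    step = k * hover_prec                  # voxel edge: k x hovering precision
    axes = [np.arange(l + step / 2, h, step) for l, h in zip(lo, hi)]
    A = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)  # point set A
    half = step / 2
    # A voxel contains a point iff the Chebyshev distance to its centre <= half edge.
    in_B = cKDTree(target_pts).query(A, p=np.inf)[0] <= half    # set B: target voxels
    in_C = cKDTree(obstacle_pts).query(A, p=np.inf)[0] <= half  # set C: obstacle voxels
    D = A[~in_B & ~in_C]                                        # set D: empty voxels
    occupied = np.vstack([A[in_B], A[in_C]])
    return D[cKDTree(occupied).query(D)[0] >= d_safe]           # set E: safe voxels
```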
Further, in step 6), when the camera main view model is established at each image acquisition candidate point, the camera external parameter Pc is a matrix of 4 rows and 4 columns; the first 3 rows and first 3 columns of Pc form the orthogonal rotation matrix R = [η1, η2, η3], the first three rows of the last column form the offset column vector T, and the fourth row is [0, 0, 0, 1];
η1, η2, η3 are unit row vectors; η3 is the unit direction vector from a point in the image acquisition candidate point set towards its nearest point in the target point cloud; η1 is perpendicular to both the Z axis and η3, and η2 is perpendicular to both η1 and η3;
The camera internal parameters are obtained through manufacturer description, calibration plate calibration or during low-precision point cloud reconstruction.
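A minimal sketch of the main view extrinsic matrix of step 6). The offset T = -R·C follows the detailed embodiment given later in this description; the exact row/column convention is otherwise an assumption, and the construction degenerates when the viewing direction is vertical.

```python
import numpy as np

def main_view_extrinsic(candidate, nearest_target_pt):
    candidate = np.asarray(candidate, float)
    eta3 = np.asarray(nearest_target_pt, float) - candidate
    eta3 /= np.linalg.norm(eta3)               # unit viewing direction
    eta1 = np.cross([0.0, 0.0, 1.0], eta3)     # perpendicular to Z axis and eta3
    eta1 /= np.linalg.norm(eta1)               # (degenerate if eta3 is vertical)
    eta2 = np.cross(eta3, eta1)                # perpendicular to eta1 and eta3
    R = np.stack([eta1, eta2, eta3])           # rows are the unit row vectors
    Pc = np.eye(4)
    Pc[:3, :3] = R
    Pc[:3, 3] = -R @ candidate                 # offset column vector T = -R*C
    return Pc                                  # fourth row stays [0, 0, 0, 1]
```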
Further, in step 8), a class I grid is a visible target grid whose distance to the unmanned aerial vehicle image acquisition candidate point is within the threshold d, and a class II grid is a visible target grid whose distance is beyond the threshold d, with d = (P/λ)·p·f_eq/S_35 as in step 5.1) (see the sketch below).
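A minimal sketch of steps 7)-8) using Open3D's ray-casting scene (the patent only requires "a light projection scene"; the library choice and the omission of a field-of-view test are assumptions). A target grid is visible when the first hit along the ray from the candidate point to the grid's centre is that same triangle of the target mesh; visible grids are then split by the threshold d.

```python
import numpy as np
import open3d as o3d

def visible_grids(candidate, target_mesh, obstacle_mesh, d):
    """target_mesh / obstacle_mesh: legacy o3d.geometry.TriangleMesh."""
    scene = o3d.t.geometry.RaycastingScene()
    tid = scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(target_mesh))
    scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(obstacle_mesh))
    tri = np.asarray(target_mesh.triangles)
    centers = np.asarray(target_mesh.vertices)[tri].mean(axis=1)  # one per grid
    dirs = centers - np.asarray(candidate, float)
    dist = np.linalg.norm(dirs, axis=1)
    rays = np.hstack([np.tile(candidate, (len(centers), 1)),
                      dirs / dist[:, None]]).astype(np.float32)
    hit = scene.cast_rays(o3d.core.Tensor(rays))
    ok = (hit["geometry_ids"].numpy() == tid) & \
         (hit["primitive_ids"].numpy() == np.arange(len(centers)))
    # Class I: visible within threshold d; class II: visible beyond d.
    return np.where(ok & (dist <= d))[0], np.where(ok & (dist > d))[0], dist
```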
Further, in step 9), the step of acquiring the working viewpoint from the set of valid viewpoints includes:
9.1 Initializing the coverage rate of the grid to be 0, and initializing the observed grid set to be an empty set;
9.2) Calculating a grid visibility order table, the table comprising the grid indexes and the numbers of times n_i^I and n_i^II that each grid i is observed by the effective viewpoints as a class I grid and as a class II grid, respectively, and going to step 9.3); if n_i^I + n_i^II is less than 3, the grid is considered insufficiently visible, and before step 9.3) is performed n_i^I and n_i^II are assigned the value 0 and the visibility information of the grid is modified to invisible in all effective viewpoints;
9.3) Calculating the weight of each effective viewpoint, W_j = Σ_i (a_ij + γ·b_ij), wherein j is the viewpoint number; a_ij is the class I grid visibility identifier, a_ij being 1 when grid i is seen by effective viewpoint j as a class I grid and 0 otherwise; b_ij is the class II grid visibility identifier, b_ij being 1 when grid i is seen by effective viewpoint j as a class II grid and 0 otherwise; γ is the class II grid weight correction parameter, a real number greater than 0 and less than 1;
9.4) Taking the effective viewpoint with the largest weight as a newly added working viewpoint, adding it to the working viewpoint set, and obtaining the grids visible from the newly added working viewpoint;
9.5) Adding the class I grid indexes among the grids visible from the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
9.6) Updating the effective viewpoint weights, W_j = Σ_i (c_ij + γ·b_ij), wherein c_ij is the newly added class I grid visibility identifier, c_ij being 1 when grid i is visible to effective viewpoint j as a class I grid and is not in the observed grid set, and 0 otherwise;
9.7) For each effective viewpoint not in the working viewpoint set, calculating the number of target grids it observes in common with each working viewpoint in the working viewpoint set as the common-view grid number;
calculating the number of class I grids it can newly add relative to the observed grid set as the newly added class I grid number;
if the ratios of the common-view grid number to the number of grids observed by the effective viewpoint and to the number of grids observed by the corresponding working viewpoint both exceed the overlap rate r, adding the effective viewpoint to the newly added working viewpoint cache set; otherwise, adding the effective viewpoint together with its nearest point in the target point cloud, and the working viewpoint having the largest common-view grid number with it together with that working viewpoint's nearest point in the target point cloud, to the oblique viewpoint information cache set;
9.8) If the newly added working viewpoint cache set is an empty set, adding effective oblique viewpoints to the effective viewpoint set and returning to step 9.1); otherwise, selecting the effective viewpoint with the largest weight from the newly added working viewpoint cache set as the newly added working viewpoint and adding it to the working viewpoint set;
9.9) Calculating, as the newly added rate, the ratio of the number of class I grids visible from the newly added working viewpoint that are new relative to the observed grid set to the total number of target grids;
9.10) Adding the indexes of the newly visible class I grids of the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
while the number of viewpoints in the working viewpoint set is smaller than N_max, the newly added rate is not smaller than 0.001, and the grid coverage rate is smaller than f, returning to step 9.6);
9.11) According to the class I grids visible from each viewpoint in the working viewpoint set, calculating the number of times each target grid is observed, and screening the target grids observed 1 or 2 times as the grids to be supplemented;
9.12) Sorting the effective viewpoints not in the working viewpoint set in descending order of the overlap count between their class I grids and the grids to be supplemented; traversing the sorted effective viewpoints and stopping the traversal when the overlap ratio of a traversed effective viewpoint exceeds the overlap rate r; adding that effective viewpoint to the working viewpoint set and updating the observation counts of the target grids according to the class I grid indexes it observes;
9.13) Repeating step 9.12) until no target grid has an observation count of 1 or 2, or no effective viewpoint satisfying the condition of the previous step exists, and outputting the working viewpoint set (a minimal sketch of the greedy selection follows).
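A minimal sketch of the greedy core of steps 9.3)-9.10), under the weight W_j = Σ_i (a_ij + γ·b_ij) reconstructed above; the common-view overlap test, the oblique viewpoint cache, and the supplementing stage of steps 9.7) and 9.11)-9.13) are omitted, and all thresholds are illustrative.

```python
def select_working_viewpoints(class_I, class_II, n_grids, gamma=0.5,
                              n_max=100, min_gain=0.001, target_cov=0.95):
    """class_I[j], class_II[j]: sets of grid indexes seen by effective viewpoint j."""
    observed, working = set(), []
    while len(working) < n_max:
        # Updated weight of step 9.6): count only class I grids not yet observed
        # (the c-term), plus the gamma-corrected class II term.
        weights = {j: len(class_I[j] - observed) + gamma * len(class_II[j])
                   for j in class_I if j not in working}
        if not weights:
            break
        j_best = max(weights, key=weights.get)
        gain = len(class_I[j_best] - observed) / n_grids     # newly added rate
        if gain < min_gain or len(observed) / n_grids >= target_cov:
            break                                            # diminishing returns
        working.append(j_best)                               # step 9.4)
        observed |= class_I[j_best]                          # steps 9.5)/9.10)
    return working, observed
```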
Further, in step 9.8), the step of adding effective oblique viewpoints to the effective viewpoint set includes:
sorting the effective viewpoints in the oblique viewpoint information cache set in descending order of common-view grid number;
traversing the sorted oblique viewpoint cache set, selecting as the newly added effective viewpoint an effective viewpoint whose nearest point in the target point cloud and whose matched working viewpoint's nearest point in the target point cloud are not the same point, and stopping the traversal;
and establishing a camera oblique-view model for the newly added effective viewpoint, calculating the visibility information of the target grids, and adding it to the effective viewpoint set.
Further, in step 10), the step of generating the unmanned aerial vehicle flight path in the flight space based on the working viewpoint coordinates includes:
10.1 Inputting the flight space and the working viewpoint coordinates of the unmanned aerial vehicle;
10.2 Obtaining a maximum Z coordinate from the working viewpoint coordinate;
10.3) Slicing the flight space into layers along the Z direction and taking the layer closest to the maximum Z coordinate as the transfer layer;
10.4) Projecting all the working viewpoints onto the transfer layer, the projected coordinates being used as working control points;
10.5) Performing working control point path planning;
10.6) Following the planned working control point path, the unmanned aerial vehicle flies to each working control point in sequence; at each control point it flies vertically to each working viewpoint corresponding to that control point to acquire images, returns to the control point, and proceeds to the next working control point (see the sketch below).
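A minimal sketch of steps 10.2)-10.6). The patent does not fix the control point planner; the greedy nearest-neighbour ordering below and the layer snapping are illustrative assumptions.

```python
import numpy as np

def plan_path(viewpoints, layer_height):
    """viewpoints: (N,3) working viewpoint coordinates; layer_height: Z slice size."""
    z_max = viewpoints[:, 2].max()                           # step 10.2)
    z_transfer = layer_height * round(z_max / layer_height)  # nearest layer, 10.3)
    control = viewpoints.copy()
    control[:, 2] = z_transfer                               # projection, step 10.4)
    order, left = [0], set(range(1, len(control)))
    while left:                                # nearest-neighbour tour, step 10.5)
        last = control[order[-1]]
        nxt = min(left, key=lambda i: np.linalg.norm(control[i] - last))
        order.append(nxt)
        left.remove(nxt)
    # Step 10.6): at each control point the UAV descends vertically to the working
    # viewpoint, acquires images, climbs back, and moves to the next control point.
    return order, control
```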
The invention has the following beneficial effects:
1) Through the viewpoint preprocessing method, the unmanned aerial vehicle can fly close to the target structure, improving the image pixel precision.
2) The viewpoint preprocessing method distinguishes the target grid from the obstacle grid, realizing static obstacle avoidance for the unmanned aerial vehicle.
3) Through the viewpoint screening method, the class II grids ensure the reconstructability between images and the class I grids ensure the three-dimensional reconstruction accuracy.
4) The unmanned aerial vehicle flight path generated by the invention requires no manual intervention and enables automatic close-range image acquisition.
Detailed Description
The present invention is further described below with reference to examples, but the scope of the invention should not be construed as limited to the following examples. Various substitutions and alterations made according to ordinary knowledge and customary means of the art, without departing from the technical spirit of the invention, are all intended to fall within the scope of the invention.
Example 1:
referring to fig. 1 to 9, an unmanned aerial vehicle proximity image acquisition method based on incremental reconstruction includes the following steps:
1) Acquiring oblique photographic images and/or multi-view images of a target structure by using an unmanned aerial vehicle, and reconstructing the acquired images into a low-precision image point cloud;
2) Acquiring a three-dimensional grid model of a target structure, and dispersing the three-dimensional grid model into a point cloud of the target structure;
3) Matching the target structure point cloud with the low-precision image point cloud and recording the coordinate transformation matrix M of the matching process; taking the matched target structure point cloud as the target point cloud, and taking the low-precision point cloud points whose distance from the target point cloud is greater than d_min = 10 cm as the obstacle point cloud;
4) Respectively constructing a target grid and an obstacle grid based on the target point cloud and the obstacle point cloud;
5) Acquiring a flight space and an image acquisition candidate point set;
6) Establishing a camera main view model at each image acquisition candidate point;
7) Creating a ray-casting (light projection) scene from the target grids, the obstacle grids and the camera models, and calculating the visibility information of the target grids under each camera model, the visibility information comprising the index of each visible target grid and the distance between each visible target grid and the unmanned aerial vehicle image acquisition candidate point;
8) Based on the visibility information of the target grids, dividing the visible target grids under each camera model into class I grids and class II grids, and screening the unmanned aerial vehicle image acquisition candidate points whose class I grid number is greater than N as the effective viewpoint set;
9) Acquiring a working viewpoint from the effective viewpoint set;
10) Generating a flight path of the unmanned aerial vehicle in the flight space based on the working viewpoint coordinates;
11) Based on the unmanned aerial vehicle flight path and the camera external parameters of the working viewpoints, calculating the flight parameters of the unmanned aerial vehicle at each working viewpoint, and generating a flight file.
In the step 4), the target point cloud is packaged into a target grid, or the target structure three-dimensional grid transformed by the coordinate transformation matrix M is packaged into the target grid.
In step 5), the step of acquiring the flight space and the image acquisition candidate point set includes:
5.1) Establishing a semi-directional bounding box of the target grid and expanding the bounding box sideways and upwards, wherein the expansion size is d = (P/λ)·p·f_eq/S_35 and the side length of the expanded bounding box is rounded up to an integral multiple of the hovering precision of the unmanned aerial vehicle; P is the desired point cloud point position tolerance, λ ∈ [5,10] is the conversion coefficient between pixel precision and point cloud tolerance, p is the pixel precision, f_eq is the 35 mm equivalent focal length of the camera, and S_35 is the 35 mm film frame size;
5.2) Dividing the expanded bounding box into voxels according to the second rule, with k times the hovering precision of the unmanned aerial vehicle as the voxel edge length, and taking the center point coordinates of the voxels as point set A, wherein k is a natural number not less than 1;
5.3) Partitioning point set A: a point is assigned to point set B if the voxel it represents contains target point cloud points, to point set C if the voxel contains obstacle point cloud points, and to point set D if the voxel contains neither target point cloud nor obstacle point cloud points;
5.4) According to the third rule, removing from point set D the points whose nearest-point distance to point set B or point set C is smaller than the safety distance d_safe, to obtain point set E;
5.5) Removing from point set E the points directly below point set B and point set C, to obtain point set F;
5.6) Selecting the largest point cluster in point set F as point set G; the voxel space represented by point set G is the flight space of the unmanned aerial vehicle;
5.7) Removing from point set G the points whose distance to the nearest point of the target point cloud is smaller than d, to obtain point set H;
5.8) Taking each point in point set H as a starting point and pointing to its nearest point in the target point cloud, constructing a line of sight and calculating the sight elevation angle; eliminating from point set H the points whose sight elevation angle exceeds the maximum elevation angle of the unmanned aerial vehicle gimbal, to obtain the unmanned aerial vehicle image acquisition candidate point set.
In step 5.1), the height direction of the target grid semi-directional bounding box is aligned with the positive direction of the Z axis, and the lateral surfaces of the bounding box are all perpendicular to the XY plane.
In step 5.4), the safety distance d_safe is not smaller than the sum of the hovering precision of the unmanned aerial vehicle, the estimated precision of the low-precision point cloud, and the minimum obstacle avoidance distance of the unmanned aerial vehicle.
In step 6), when the camera main view model is built at each image acquisition candidate point, the camera external parameter Pc is a matrix of 4 rows and 4 columns; the first 3 rows and first 3 columns of Pc form the orthogonal rotation matrix R = [η1, η2, η3], the first three rows of the last column form the offset column vector T, and the fourth row is [0, 0, 0, 1];
η1, η2, η3 are unit row vectors; η3 is the unit direction vector from a point in the image acquisition candidate point set towards its nearest point in the target point cloud; η1 is perpendicular to both the Z axis and η3, and η2 is perpendicular to both η1 and η3;
The camera internal parameters are obtained through manufacturer description, calibration plate calibration or during low-precision point cloud reconstruction.
In step 8), a class I grid is a visible target grid whose distance to the unmanned aerial vehicle image acquisition candidate point is within the threshold d, and a class II grid is a visible target grid whose distance is beyond the threshold d, with d = (P/λ)·p·f_eq/S_35.
In step 9), the step of acquiring the working viewpoints from the set of effective viewpoints includes:
9.1 Initializing the coverage rate of the grid to be 0, and initializing the observed grid set to be an empty set;
9.2) Calculating a grid visibility order table, the table comprising the grid indexes and the numbers of times n_i^I and n_i^II that each grid i is observed by the effective viewpoints as a class I grid and as a class II grid, respectively, and going to step 9.3); if n_i^I + n_i^II is less than 3, the grid is considered insufficiently visible, and before step 9.3) is performed n_i^I and n_i^II are assigned the value 0 and the visibility information of the grid is modified to invisible in all effective viewpoints;
9.3) Calculating the weight of each effective viewpoint, W_j = Σ_i (a_ij + γ·b_ij), wherein j is the viewpoint number; a_ij is the class I grid visibility identifier, a_ij being 1 when grid i is seen by effective viewpoint j as a class I grid and 0 otherwise; b_ij is the class II grid visibility identifier, b_ij being 1 when grid i is seen by effective viewpoint j as a class II grid and 0 otherwise; γ is the class II grid weight correction parameter, a real number greater than 0 and less than 1;
9.4) Taking the effective viewpoint with the largest weight as a newly added working viewpoint, adding it to the working viewpoint set, and obtaining the grids visible from the newly added working viewpoint;
9.5) Adding the class I grid indexes among the grids visible from the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
9.6) Updating the effective viewpoint weights, W_j = Σ_i (c_ij + γ·b_ij), wherein c_ij is the newly added class I grid visibility identifier, c_ij being 1 when grid i is visible to effective viewpoint j as a class I grid and is not in the observed grid set, and 0 otherwise;
9.7) For each effective viewpoint not in the working viewpoint set, calculating the number of target grids it observes in common with each working viewpoint in the working viewpoint set as the common-view grid number;
calculating the number of class I grids it can newly add relative to the observed grid set as the newly added class I grid number;
if the ratios of the common-view grid number to the number of grids observed by the effective viewpoint and to the number of grids observed by the corresponding working viewpoint both exceed the overlap rate r, adding the effective viewpoint to the newly added working viewpoint cache set; otherwise, adding the effective viewpoint together with its nearest point in the target point cloud, and the working viewpoint having the largest common-view grid number with it together with that working viewpoint's nearest point in the target point cloud, to the oblique viewpoint information cache set;
9.8) If the newly added working viewpoint cache set is an empty set, adding effective oblique viewpoints to the effective viewpoint set and returning to step 9.1); otherwise, selecting the effective viewpoint with the largest weight from the newly added working viewpoint cache set as the newly added working viewpoint and adding it to the working viewpoint set;
9.9) Calculating, as the newly added rate, the ratio of the number of class I grids visible from the newly added working viewpoint that are new relative to the observed grid set to the total number of target grids;
9.10) Adding the indexes of the newly visible class I grids of the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
while the number of viewpoints in the working viewpoint set is smaller than N_max, the newly added rate is not smaller than 0.001, and the grid coverage rate is smaller than f, returning to step 9.6);
9.11) According to the class I grids visible from each viewpoint in the working viewpoint set, calculating the number of times each target grid is observed, and screening the target grids observed 1 or 2 times as the grids to be supplemented;
9.12) Sorting the effective viewpoints not in the working viewpoint set in descending order of the overlap count between their class I grids and the grids to be supplemented; traversing the sorted effective viewpoints and stopping the traversal when the overlap ratio of a traversed effective viewpoint exceeds the overlap rate r; adding that effective viewpoint to the working viewpoint set and updating the observation counts of the target grids according to the class I grid indexes it observes;
9.13) Repeating step 9.12) until no target grid has an observation count of 1 or 2, or no effective viewpoint satisfying the condition of the previous step exists, and outputting the working viewpoint set.
In step 9.8), the step of adding effective oblique viewpoints to the effective viewpoint set includes:
sorting the effective viewpoints in the oblique viewpoint information cache set in descending order of common-view grid number;
traversing the sorted oblique viewpoint cache set, selecting as the newly added effective viewpoint an effective viewpoint whose nearest point in the target point cloud and whose matched working viewpoint's nearest point in the target point cloud are not the same point, and stopping the traversal;
and establishing a camera oblique-view model for the newly added effective viewpoint, calculating the visibility information of the target grids, and adding it to the effective viewpoint set.
In step 10), the step of generating the unmanned aerial vehicle flight path in the flight space based on the working viewpoint coordinates includes:
10.1 Inputting the flight space and the working viewpoint coordinates of the unmanned aerial vehicle;
10.2 Obtaining a maximum Z coordinate from the working viewpoint coordinate;
10.3) Slicing the flight space into layers along the Z direction and taking the layer closest to the maximum Z coordinate as the transfer layer;
10.4) Projecting all the working viewpoints onto the transfer layer, the projected coordinates being used as working control points;
10.5) Performing working control point path planning;
10.6) Following the planned working control point path, the unmanned aerial vehicle flies to each working control point in sequence; at each control point it flies vertically to each working viewpoint corresponding to that control point to acquire images, returns to the control point, and proceeds to the next working control point.
Example 2:
An unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction comprises the following steps:
1) Acquiring oblique photographic images and/or multi-view images of a target structure by using an unmanned aerial vehicle, and reconstructing the acquired images into a low-precision image point cloud;
2) Acquiring a three-dimensional grid model of a target structure, and dispersing the three-dimensional grid model into a point cloud of the target structure;
3) Matching the target structure point cloud with the low-precision image point cloud and recording the coordinate transformation matrix M of the matching process; taking the matched target structure point cloud as the target point cloud, and taking the low-precision point cloud points whose distance from the target point cloud is greater than d_min = 10 cm as the obstacle point cloud;
4) Respectively constructing a target grid and an obstacle grid based on the target point cloud and the obstacle point cloud;
5) Acquiring a flight space and an image acquisition candidate point set;
6) Establishing a camera main view model at each image acquisition candidate point;
7) Creating a ray-casting (light projection) scene from the target grids, the obstacle grids and the camera models, and calculating the visibility information of the target grids under each camera model, the visibility information comprising the index of each visible target grid and the distance between each visible target grid and the unmanned aerial vehicle image acquisition candidate point;
8) Based on the visibility information of the target grids, dividing the visible target grids under each camera model into class I grids and class II grids, and screening the unmanned aerial vehicle image acquisition candidate points whose class I grid number is greater than N as the effective viewpoint set;
9) Acquiring a working viewpoint from the effective viewpoint set;
10) Generating a flight path of the unmanned aerial vehicle in the flight space based on the working viewpoint coordinates;
11) Based on the unmanned aerial vehicle flight path and the camera external parameters of the working viewpoints, calculating the flight parameters of the unmanned aerial vehicle at each working viewpoint, and generating a flight file.
Example 3:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the same technical content as embodiment 2; further, in step 4), the target point cloud is encapsulated into the target grid, or the three-dimensional grid of the target structure transformed by the coordinate transformation matrix M is encapsulated into the target grid.
Example 4:
the unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content as in any one of embodiments 2-3, and further, in step 5), the step of acquiring the flight space and the image acquisition candidate point set includes:
5.1) Establishing a semi-directional bounding box of the target grid and expanding the bounding box sideways and upwards, wherein the expansion size is d = (P/λ)·p·f_eq/S_35 and the side length of the expanded bounding box is rounded up to an integral multiple of the hovering precision of the unmanned aerial vehicle; P is the desired point cloud point position tolerance, λ ∈ [5,10] is the conversion coefficient between pixel precision and point cloud tolerance, p is the pixel precision, f_eq is the 35 mm equivalent focal length of the camera, and S_35 is the 35 mm film frame size;
5.2) Dividing the expanded bounding box into voxels according to the second rule, with k times the hovering precision of the unmanned aerial vehicle as the voxel edge length, and taking the center point coordinates of the voxels as point set A, wherein k is a natural number not less than 1;
5.3) Partitioning point set A: a point is assigned to point set B if the voxel it represents contains target point cloud points, to point set C if the voxel contains obstacle point cloud points, and to point set D if the voxel contains neither target point cloud nor obstacle point cloud points;
5.4) According to the third rule, removing from point set D the points whose nearest-point distance to point set B or point set C is smaller than the safety distance d_safe, to obtain point set E;
5.5) Removing from point set E the points directly below point set B and point set C, to obtain point set F;
5.6) Selecting the largest point cluster in point set F as point set G; the voxel space represented by point set G is the flight space of the unmanned aerial vehicle;
5.7) Removing from point set G the points whose distance to the nearest point of the target point cloud is smaller than d, to obtain point set H;
5.8) Taking each point in point set H as a starting point and pointing to its nearest point in the target point cloud, constructing a line of sight and calculating the sight elevation angle; eliminating from point set H the points whose sight elevation angle exceeds the maximum elevation angle of the unmanned aerial vehicle gimbal, to obtain the unmanned aerial vehicle image acquisition candidate point set.
Example 5:
the unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content as in any one of embodiments 2-4, and further, in step 5.1), the height direction of the target grid semi-directional bounding box is aligned with the positive direction of the Z axis, and the lateral surfaces of the bounding box are all perpendicular to the XY plane.
Example 6:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content of any one of embodiments 2-5; further, the safety distance d_safe in step 5.4) is not smaller than the sum of the hovering precision of the unmanned aerial vehicle, the estimated precision of the low-precision point cloud, and the minimum obstacle avoidance distance of the unmanned aerial vehicle.
Example 7:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content of any one of embodiments 2-6; further, in step 6), when the camera main view model is built at each image acquisition candidate point, the camera external parameter Pc is a matrix of 4 rows and 4 columns; the first 3 rows and first 3 columns of Pc form the orthogonal rotation matrix R = [η1, η2, η3], the first three rows of the last column form the offset column vector T, and the fourth row is [0, 0, 0, 1];
η1, η2, η3 are unit row vectors; η3 is the unit direction vector from a point in the image acquisition candidate point set towards its nearest point in the target point cloud; η1 is perpendicular to both the Z axis and η3, and η2 is perpendicular to both η1 and η3;
The camera internal parameters are obtained through manufacturer description, calibration plate calibration or during low-precision point cloud reconstruction.
Example 8:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content of any one of embodiments 2-7; further, in step 8), a class I grid is a visible target grid whose distance to the unmanned aerial vehicle image acquisition candidate point is within the threshold d, and a class II grid is a visible target grid whose distance is beyond the threshold d.
Example 9:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content as in any one of embodiments 2-8, and further, in step 9), the step of acquiring the working view point from the effective view point set includes:
9.1 Initializing the coverage rate of the grid to be 0, and initializing the observed grid set to be an empty set;
9.2) Calculating a grid visibility order table, the table comprising the grid indexes and the numbers of times n_i^I and n_i^II that each grid i is observed by the effective viewpoints as a class I grid and as a class II grid, respectively, and going to step 9.3); if n_i^I + n_i^II is less than 3, the grid is considered insufficiently visible, and before step 9.3) is performed n_i^I and n_i^II are assigned the value 0 and the visibility information of the grid is modified to invisible in all effective viewpoints;
9.3) Calculating the weight of each effective viewpoint, W_j = Σ_i (a_ij + γ·b_ij), wherein j is the viewpoint number; a_ij is the class I grid visibility identifier, a_ij being 1 when grid i is seen by effective viewpoint j as a class I grid and 0 otherwise; b_ij is the class II grid visibility identifier, b_ij being 1 when grid i is seen by effective viewpoint j as a class II grid and 0 otherwise; γ is the class II grid weight correction parameter, a real number greater than 0 and less than 1;
9.4) Taking the effective viewpoint with the largest weight as a newly added working viewpoint, adding it to the working viewpoint set, and obtaining the grids visible from the newly added working viewpoint;
9.5) Adding the class I grid indexes among the grids visible from the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
9.6) Updating the effective viewpoint weights, W_j = Σ_i (c_ij + γ·b_ij), wherein c_ij is the newly added class I grid visibility identifier, c_ij being 1 when grid i is visible to effective viewpoint j as a class I grid and is not in the observed grid set, and 0 otherwise;
9.7) For each effective viewpoint not in the working viewpoint set, calculating the number of target grids it observes in common with each working viewpoint in the working viewpoint set as the common-view grid number;
calculating the number of class I grids it can newly add relative to the observed grid set as the newly added class I grid number;
if the ratios of the common-view grid number to the number of grids observed by the effective viewpoint and to the number of grids observed by the corresponding working viewpoint both exceed the overlap rate r, adding the effective viewpoint to the newly added working viewpoint cache set; otherwise, adding the effective viewpoint together with its nearest point in the target point cloud, and the working viewpoint having the largest common-view grid number with it together with that working viewpoint's nearest point in the target point cloud, to the oblique viewpoint information cache set;
9.8) If the newly added working viewpoint cache set is an empty set, adding effective oblique viewpoints to the effective viewpoint set and returning to step 9.1); otherwise, selecting the effective viewpoint with the largest weight from the newly added working viewpoint cache set as the newly added working viewpoint and adding it to the working viewpoint set;
9.9) Calculating, as the newly added rate, the ratio of the number of class I grids visible from the newly added working viewpoint that are new relative to the observed grid set to the total number of target grids;
9.10) Adding the indexes of the newly visible class I grids of the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids;
while the number of viewpoints in the working viewpoint set is smaller than N_max, the newly added rate is not smaller than 0.001, and the grid coverage rate is smaller than f, returning to step 9.6);
9.11) According to the class I grids visible from each viewpoint in the working viewpoint set, calculating the number of times each target grid is observed, and screening the target grids observed 1 or 2 times as the grids to be supplemented;
9.12) Sorting the effective viewpoints not in the working viewpoint set in descending order of the overlap count between their class I grids and the grids to be supplemented; traversing the sorted effective viewpoints and stopping the traversal when the overlap ratio of a traversed effective viewpoint exceeds the overlap rate r; adding that effective viewpoint to the working viewpoint set and updating the observation counts of the target grids according to the class I grid indexes it observes;
9.13) Repeating step 9.12) until no target grid has an observation count of 1 or 2, or no effective viewpoint satisfying the condition of the previous step exists, and outputting the working viewpoint set.
Example 10:
the unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content as in any one of embodiments 2-9, further, in step 9.8), the step of adding a new effective oblique viewpoint in the effective viewpoint set comprises:
the effective viewpoints in the oblique viewpoint information cache set are sorted in descending order of common-view grid number;
the sorted oblique viewpoint cache set is traversed, an effective viewpoint whose nearest point in the target point cloud and whose matched working viewpoint's nearest point in the target point cloud are not the same point is selected as the newly added effective viewpoint, and the traversal is stopped;
and a camera oblique-view model is established for the newly added effective viewpoint, the visibility information of the target grids is calculated, and it is added to the effective viewpoint set.
Example 11:
The unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction has the technical content of any one of embodiments 2-10; further, in step 10), the step of generating the flight path of the unmanned aerial vehicle in the flight space based on the working viewpoint coordinates includes:
10.1 Inputting the flight space and the working viewpoint coordinates of the unmanned aerial vehicle;
10.2 Obtaining a maximum Z coordinate from the working viewpoint coordinate;
10.3) Slicing the flight space into layers along the Z direction and taking the layer closest to the maximum Z coordinate as the transfer layer;
10.4) Projecting all the working viewpoints onto the transfer layer, the projected coordinates being used as working control points;
10.5) Performing working control point path planning;
10.6) Following the planned working control point path, the unmanned aerial vehicle flies to each working control point in sequence; at each control point it flies vertically to each working viewpoint corresponding to that control point to acquire images, returns to the control point, and proceeds to the next working control point.
Example 12:
An unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction comprises the following steps:
1) Performing oblique photography or multi-view image acquisition on a target structure with an unmanned aerial vehicle and reconstructing the images into a low-precision point cloud;
2) Dispersing the three-dimensional grid model of the target structure into a point cloud and matching it with the low-precision point cloud, recording the coordinate transformation matrix M of the matching process; taking the matched target structure point cloud as the target point cloud, and taking the low-precision point cloud points whose distance from the target point cloud is greater than 10 cm as the obstacle point cloud;
3) Packaging the target point cloud into a target grid, or taking the three-dimensional grid of the target structure transformed by the coordinate transformation matrix M as the target grid, and packaging the obstacle point cloud into an obstacle grid;
4) Acquiring a flight space and an image acquisition candidate point set;
5) Establishing a camera main view model at each image acquisition candidate point;
6) Creating a light projection scene according to the target grids, the obstacle grids and the camera models, and calculating the visual information of the target grids under each camera model, wherein the visual information comprises indexes of each visual target grid and the distance between each visual target grid and an unmanned aerial vehicle image acquisition candidate point;
7) Dividing the visual target grids under each camera model into I-type grids and II-type grids according to the visual information of the target grids, and screening unmanned aerial vehicle image acquisition candidate points with the number of the I-type grids being greater than N to be used as an effective visual point set;
8) Acquiring a working viewpoint from the effective viewpoint set;
9) Generating a flight path of the unmanned aerial vehicle in the flight space based on the working viewpoint coordinates;
10) According to the unmanned aerial vehicle flight path and the camera external parameters of the working viewpoints, calculating the unmanned aerial vehicle flight parameters at each working viewpoint, and generating a flight file according to the selected unmanned aerial vehicle flight file format.
The method for acquiring the flying space and the image acquisition candidate point set comprises the following specific steps of:
4.1) According to the first rule, establishing a semi-directional bounding box of the target grid, expanding the bounding box sideways and upwards by d, and rounding the side length of the expanded bounding box up to an integral multiple of the hovering precision of the unmanned aerial vehicle;
4.2) According to the second rule, dividing the expanded bounding box into voxels with k times the hovering precision of the unmanned aerial vehicle as the voxel edge length, and taking the center point coordinates of the voxels as point set A;
4.3) Partitioning point set A: a point is assigned to point set B when the voxel it represents contains target point cloud points, to point set C when the voxel contains obstacle point cloud points, and to point set D when the voxel contains neither;
4.4) According to the third rule, removing from point set D the points whose nearest-point distance to point set B or point set C is smaller than the safety distance d_safe, to obtain point set E;
4.5) Removing from point set E the points directly below point set B and point set C, to obtain point set F;
4.6) Selecting the largest point cluster in point set F as point set G; the voxel space represented by point set G is the flight space of the unmanned aerial vehicle;
4.7) Removing from point set G the points whose distance to the nearest point of the target point cloud is smaller than d, to obtain point set H;
4.8) Taking each point in point set H as a starting point and pointing to its nearest point in the target point cloud, constructing a line of sight and calculating the sight elevation angle; eliminating from point set H the points whose sight elevation angle exceeds the maximum elevation angle of the unmanned aerial vehicle gimbal, to obtain the unmanned aerial vehicle image acquisition candidate point set;
The first rule includes: the height direction of the semi-directional bounding box is aligned with the positive direction of the Z axis, the lateral surfaces of the bounding box are all perpendicular to the XY plane, and the expansion size of the viewpoint preprocessing method is d = (P/λ)·p·f_eq/S_35, wherein P is the expected point cloud point position tolerance, λ is the conversion coefficient between pixel precision and point cloud tolerance, taking a value in the interval [5,10], p is the pixel precision, f_eq is the 35 mm equivalent focal length of the camera, and S_35 is the 35 mm film frame size, taken as 36 mm × 24 mm. A worked numeric example follows.
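A worked numeric example of the expansion size, under the reading d = (P/λ)·p·f_eq/S_35 reconstructed above; all values below are illustrative, not taken from the patent.

```python
P = 0.01        # desired point position tolerance: 10 mm, in metres
lam = 8         # conversion coefficient, chosen in [5, 10]
p = 8000        # pixel precision: pixels across the 36 mm frame width
f_eq = 0.035    # 35 mm equivalent focal length, in metres
S_35 = 0.036    # 35 mm frame width, in metres
d = (P / lam) * p * f_eq / S_35
print(f"expansion size d = {d:.2f} m")   # -> expansion size d = 9.72 m
```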
The second rule includes: the voxel edge length is k times the hovering precision of the unmanned aerial vehicle, wherein k is a natural number not smaller than 1;
the third rule includes: the safety distance d_safe is not smaller than the sum of the hovering precision of the unmanned aerial vehicle, the estimated precision of the low-precision point cloud of step 1), and the minimum obstacle avoidance distance of the unmanned aerial vehicle.
A camera main view model is established at each image acquisition candidate point. The camera external parameter Pc is a matrix of 4 rows and 4 columns; the first 3 rows and first 3 columns of Pc form the orthogonal rotation matrix R = [η1, η2, η3], wherein η1, η2, η3 are unit row vectors, η3 is the unit direction vector from a point in the image acquisition candidate point set towards its nearest point in the target point cloud, and η1 is perpendicular to both the Z axis and η3 while η2 is perpendicular to both η1 and η3. The first three rows of the last column of Pc form the offset column vector T, obtained by multiplying R by the coordinates of the image acquisition candidate point and taking the negative, i.e. T = -R·C, where C is the candidate point coordinate. The fourth row of Pc is [0, 0, 0, 1]. The camera internal parameters are obtained from the manufacturer description, by calibration plate calibration, or during the low-precision point cloud reconstruction.
The visible target grids under each camera model are divided into class I grids and class II grids according to the visibility information of the target grids, and the unmanned aerial vehicle image acquisition candidate points whose class I grid number is greater than N are screened as the effective viewpoint set; a class I grid is a visible target grid whose distance to the unmanned aerial vehicle image acquisition candidate point is within the threshold d, and a class II grid is a visible target grid whose distance is beyond the threshold d.
The specific steps of obtaining the working viewpoints from the effective viewpoint set include:
a) Initializing the grid coverage rate to 0 and initializing the observed grid set to an empty set.
b) Calculating a grid visibility order table, the table comprising the grid indexes and the numbers of times n_i^I and n_i^II that each grid i is observed by the effective viewpoints as a class I grid and as a class II grid, respectively; when n_i^I + n_i^II is less than 3, the grid is deemed insufficiently visible, n_i^I and n_i^II are assigned the value 0, and the visibility information of the grid is modified to invisible in all effective viewpoints.
c) Calculating the weight of each effective viewpoint, W_j = Σ_i (a_ij + γ·b_ij), wherein j is the viewpoint number; a_ij is the class I grid visibility identifier, a_ij being 1 when grid i is seen by effective viewpoint j as a class I grid and 0 otherwise; b_ij is the class II grid visibility identifier, b_ij being 1 when grid i is seen by effective viewpoint j as a class II grid and 0 otherwise; γ, the class II grid weight correction parameter, is a real number greater than 0 and less than 1.
d) Taking the effective viewpoint with the largest weight as the newly added working viewpoint, adding it to the working viewpoint set, and obtaining the grids visible from the newly added working viewpoint.
e) Adding the class I grid indexes among the grids visible from the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids.
f) Updating the effective viewpoint weights, W_j = Σ_i (c_ij + γ·b_ij), wherein c_ij is the newly added class I grid visibility identifier, c_ij being 1 when grid i is visible to effective viewpoint j as a class I grid and is not in the observed grid set, and 0 otherwise.
g) Sorting the effective viewpoints in descending order of weight; for each traversed effective viewpoint, sequentially calculating the number of target grids it observes in common with each working viewpoint in the working viewpoint set as the common-view grid number, and calculating the number of class I grids it can newly add relative to the observed grid set as the newly added class I grid number. When the ratios of the common-view grid number to the number of grids observed by the currently traversed effective viewpoint and to the number of grids observed by the corresponding working viewpoint both exceed the overlap rate r, the effective viewpoint is added to the newly added working viewpoint cache set and the traversal stops; otherwise, the effective viewpoint together with its nearest point in the target point cloud, and the working viewpoint having the largest common-view grid number with it together with that working viewpoint's nearest point in the target point cloud, are added to the oblique viewpoint information cache set (see the sketch of this overlap test after this list).
h) When the newly added working viewpoint cache set is an empty set, effective oblique viewpoints are added to the effective viewpoint set and the procedure returns to step a); otherwise, the effective viewpoint in the newly added working viewpoint cache set is added to the working viewpoint set as the newly added working viewpoint.
i) Calculating, as the newly added rate, the ratio of the number of class I grids visible from the newly added working viewpoint that are new relative to the observed grid set to the total number of target grids.
j) Adding the indexes of the visible class I grids of the newly added working viewpoint to the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids.
k) Repeating steps f) to j) while the number of viewpoints in the working viewpoint set is smaller than N_max, the newly added rate is not smaller than 0.001, and the grid coverage rate is smaller than f.
l) According to the class I grids visible from each viewpoint in the working viewpoint set, calculating the number of times each target grid is observed, and screening the target grids observed 1 or 2 times as the grids to be supplemented.
m) Sorting the effective viewpoints not in the working viewpoint set in descending order of the overlap count between their class I grids and the grids to be supplemented; traversing the sorted effective viewpoints and stopping the traversal when the overlap ratio of a traversed effective viewpoint exceeds the overlap rate r; adding that effective viewpoint to the working viewpoint set and updating the observation counts of the target grids according to the class I grid indexes it observes.
n) Repeating step m) until no target grid has an observation count of 1 or 2, or no effective viewpoint satisfying the condition of step m) exists, and outputting the working viewpoint set.
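A minimal sketch of the two-sided overlap test of step g); reading the ratio condition as applying to both the effective viewpoint's and the working viewpoint's observed grid counts is an assumption about the original sentence, and non-empty grid sets are assumed.

```python
def passes_overlap(effective_grids: set, working_grids: set, r: float) -> bool:
    common = len(effective_grids & working_grids)   # common-view grid number
    # The common-view count must be a sufficient fraction of BOTH viewpoints'
    # observed grids for the effective viewpoint to join the cache set.
    return (common / len(effective_grids) > r and
            common / len(working_grids) > r)
```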
The fourth rule includes: sorting the effective viewpoints in the oblique viewpoint information cache set in descending order of common-view grid number; traversing the sorted oblique viewpoint cache set, selecting as the newly added effective viewpoint an effective viewpoint whose nearest point in the target point cloud and whose matched working viewpoint's nearest point in the target point cloud are not the same point, and stopping the traversal; and establishing a camera oblique-view model for the newly added effective viewpoint, calculating the target grid visibility information, and adding it to the effective viewpoint set. The camera oblique-view model is similar to the camera main view model, except that η3 is the unit direction vector from the newly added effective viewpoint towards the nearest point of the working viewpoint in the target point cloud.
Generating a flight path of the unmanned aerial vehicle in a flight space based on the working viewpoint coordinates, wherein the method comprises the following specific steps of:
Inputting unmanned aerial vehicle flight space and working viewpoint coordinates;
Obtaining a maximum Z coordinate from the working viewpoint coordinate;
slicing the flight space into layers along the Z direction and taking the layer closest to the maximum Z coordinate as the transfer layer;
projecting all the working viewpoints onto the transfer layer, the projected coordinates being used as working control points;
planning the working control point path;
and following the planned working control point path, the unmanned aerial vehicle flies to each working control point in sequence; at each control point it flies vertically to each working viewpoint corresponding to that control point to acquire images, returns to the control point, and proceeds to the next working control point.
Example 13:
An unmanned aerial vehicle proximate image acquisition method based on incremental reconstruction comprises the following steps:
Step 101, carrying out multi-view image acquisition on a structure by using an unmanned aerial vehicle, and reconstructing the acquired images into a low-precision point cloud;
Step 102, dispersing the three-dimensional grid model of a target structure 1 within the structure into a point cloud, matching it with the low-precision point cloud, recording the coordinate transformation matrix M in the matching process, taking the matched target structure point cloud as the target point cloud 2, and taking the low-precision point cloud points more than 10 cm from the target point cloud as the obstacle point cloud 3;
Step 103, encapsulating the target point cloud into a target grid 4, and encapsulating the obstacle point cloud 3 into an obstacle grid 10;
Step 104, acquiring a flight space 7 and an image acquisition candidate point set;
Step 105, establishing a camera main-view model at each image acquisition candidate point;
Step 106, creating a ray casting scene from the target grids, the obstacle grids and the camera models, and calculating the visual information of the target grids under each camera model;
Step 107, according to the visual information of the target grids, classifying the visible target grids whose distance to the unmanned aerial vehicle image acquisition candidate point is within a threshold γ as class I grids and those beyond the threshold γ as class II grids, and screening the unmanned aerial vehicle image acquisition candidate points with more than N class I grids as the effective viewpoint set 8;
Step 108, acquiring the working viewpoints 9;
Step 109, generating the unmanned aerial vehicle flight path;
Step 110, calculating the flight parameters of the unmanned aerial vehicle at each working viewpoint according to the unmanned aerial vehicle flight path and the camera extrinsic parameters of the working viewpoints, and generating a flight file in the selected unmanned aerial vehicle flight file format.
According to the method, the viewpoint preprocessing enables the unmanned aerial vehicle to fly close to the target structure, improving image pixel precision, and distinguishes the target grid from the obstacle grid, achieving static obstacle avoidance for the unmanned aerial vehicle. The viewpoint screening uses class II grids to ensure reconstructability among images and class I grids to ensure three-dimensional reconstruction precision. The unmanned aerial vehicle flight path generated by the method requires no manual intervention, so proximate image acquisition can be carried out automatically.
In this embodiment, the structure is a steel truss and the target structure 1 is a top-level bar of the truss; in other embodiments, the structure and the target structure are determined according to the task requirements.
In this embodiment, the gimbal elevation angle of the unmanned aerial vehicle is limited to 30 degrees, the safety obstacle avoidance distance is 0.5 m, and the hovering precision is 0.2 m; in other embodiments, these values differ according to the technical parameters of the unmanned aerial vehicle model.
In this embodiment, multi-view image acquisition of the structure is carried out manually; in other embodiments, oblique photography may be employed to acquire images of the structure automatically.
In step 104, the flight space and the image acquisition candidate point set are acquired by a candidate point generation method, which specifically comprises steps 1041 to 1048:
Step 1041, establishing a semi-directional bounding box 5 and an expanded bounding box 6 of the target grid 4.
Specifically, the semi-directional bounding box 5 is aligned with the positive Z-axis direction in the height direction and its lateral faces are perpendicular to the XY plane; the semi-directional bounding box is expanded laterally and upward by d to form the expanded bounding box 6.
It should be noted that d is determined by the formula d = (P/λ)·2p·f_eq/S_35, where P is the desired point cloud point position tolerance, taken as 6 mm in this embodiment; λ is the conversion coefficient between pixel precision and point cloud tolerance, which can be verified experimentally and is generally taken as 5-10; p is the pixel precision; f_eq is the 35 mm equivalent focal length of the camera on the unmanned aerial vehicle; and S_35 is the 35 mm negative film size, taken as 36 mm × 24 mm in this embodiment.
It should also be noted that the side length of the expanded bounding box is rounded up to an integral multiple of the hovering precision of the unmanned aerial vehicle.
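For orientation only, a minimal sketch of this computation in Python, assuming consistent units; the function names and the two-sided lateral expansion of the side length are assumptions of the sketch, not features fixed by the embodiment:

    import math

    def expansion_size(P, lam, p, f_eq, s35):
        # expansion distance d = (P / lambda) * 2p * f_eq / S_35, transcribed
        # from the formula above; all quantities must use consistent units
        return (P / lam) * 2.0 * p * f_eq / s35

    def expanded_side(side, d, hover):
        # a lateral side grows by d on both sides; the result is then rounded
        # up to an integral multiple of the UAV hovering precision
        return math.ceil((side + 2.0 * d) / hover) * hover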
Step 1042, dividing the expanded bounding box into voxels with k times the hovering precision of the unmanned aerial vehicle as the spacing, and calculating the coordinates of each voxel center point to form point set A.
In this embodiment, k takes a value of 2; in other embodiments, k may take any integer not less than 2.
Step 1043, dividing the point set A into a point set B, a point set C and a point set D.
Specifically, a point in point set A is classified into point set B when the voxel it represents contains the target point cloud, into point set C when the voxel contains the obstacle point cloud, and into point set D when the voxel contains no point cloud.
Step 1044, removing from point set D the points whose closest-point distance to point set B is smaller than the safety distance dsafe, obtaining point set E.
Specifically, the safety distance dsafe is not less than the sum of the hovering precision of the unmanned aerial vehicle, the estimated precision of the low-precision point cloud of step 101, and the minimum obstacle avoidance distance of the unmanned aerial vehicle.
Step 1045, removing from point set E the points directly below point set B and point set C, obtaining point set F.
Step 1046, selecting the point cluster with the largest number of points from point set F as point set G; the voxel space represented by point set G is the unmanned aerial vehicle flight space 7.
Step 1047, removing from point set G the points whose closest-point distance is smaller than the pixel precision threshold dpre, obtaining point set H.
Step 1048, constructing a sight line from each point in point set H to its nearest point in the target point cloud 2, calculating the sight-line elevation angle, and removing from point set H the points whose sight-line elevation angle exceeds the maximum elevation angle of the unmanned aerial vehicle gimbal, obtaining the unmanned aerial vehicle image acquisition candidate point set.
It should be noted that if the sight-line elevation angle is close to the maximum elevation angle of the gimbal, the rotors of the unmanned aerial vehicle may appear in the field of view, so the maximum usable elevation angle of the gimbal should be reduced.
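For illustration, a condensed sketch of steps 1044, 1045 and 1048, using NumPy arrays of voxel-centre coordinates and SciPy KD-trees; the function name, the inclusion of the obstacle voxels C in the dsafe check, and the omission of steps 1046 and 1047 are simplifying assumptions of the sketch:

    import numpy as np
    from scipy.spatial import cKDTree

    def candidate_points(D, B, C, target_pts, spacing, dsafe, max_elev_deg):
        """Condensed sketch of steps 1044, 1045 and 1048 (1046/1047 omitted)."""
        occupied = np.vstack([B, C])      # voxels holding target or obstacle points
        # step 1044: drop free-space points closer than dsafe to occupied voxels
        E = D[cKDTree(occupied).query(D)[0] >= dsafe]
        # step 1045: drop points directly below an occupied voxel (same XY column)
        xy_tree = cKDTree(occupied[:, :2])
        below = np.array([any(occupied[j, 2] > p[2]
                              for j in xy_tree.query_ball_point(p[:2], spacing / 2))
                          for p in E])
        F = E[~below]
        # step 1048: sight-line elevation angle toward the nearest target point
        nearest = target_pts[cKDTree(target_pts).query(F)[1]]
        v = nearest - F
        elev = np.degrees(np.arctan2(v[:, 2], np.linalg.norm(v[:, :2], axis=1)))
        return F[elev <= max_elev_deg]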
In step 105, a camera main-view model is established at each image acquisition candidate point. The camera extrinsic parameter Pc is a matrix of 4 rows and 4 columns; the first 3 rows and first 3 columns of Pc form an orthogonal rotation matrix R = [η1; η2; η3], where η1, η2 and η3 are unit row vectors: η3 is the unit direction vector from the image acquisition candidate point to its nearest point in the target point cloud, η1 is perpendicular to both the Z axis and η3, and η2 is perpendicular to both η1 and η3. The first three rows of the last column of Pc form the offset column vector T, which is obtained by multiplying R by the coordinates of the image acquisition candidate point and taking the negative value.
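A minimal sketch of this construction with NumPy; the sign convention chosen for η2 and the handling of a vertical viewing direction are assumptions of the sketch, not fixed by the text:

    import numpy as np

    def main_view_extrinsic(candidate, target_pts):
        """Build the 4x4 extrinsic Pc of the camera main-view model at one candidate."""
        candidate = np.asarray(candidate, dtype=float)
        nearest = target_pts[np.argmin(np.linalg.norm(target_pts - candidate, axis=1))]
        eta3 = nearest - candidate
        eta3 /= np.linalg.norm(eta3)                # viewing direction
        eta1 = np.cross([0.0, 0.0, 1.0], eta3)      # perpendicular to Z axis and eta3
        eta1 /= np.linalg.norm(eta1)                # degenerate if eta3 is vertical
        eta2 = np.cross(eta3, eta1)                 # perpendicular to eta1 and eta3
        Pc = np.eye(4)
        Pc[:3, :3] = np.vstack([eta1, eta2, eta3])  # orthogonal R with unit row vectors
        Pc[:3, 3] = -Pc[:3, :3] @ candidate         # offset column T = -R * x
        return Pc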
In this embodiment, the camera intrinsic parameters are acquired during the reconstruction of the low-precision point cloud; in other embodiments, the camera intrinsic parameters may also be obtained from the manufacturer's specifications or by calibration with a calibration board.
In step 106, a ray casting scene is created from the target grid 4, the obstacle grid 10 and the camera models, and the visual information of the target grid 4 under each camera model is calculated; the visual information comprises the visible target grid indexes and the distance between each visible target grid and the unmanned aerial vehicle image acquisition candidate point.
In this embodiment, the Open3D open-source library is used to create the ray casting scene; in other embodiments, the ray casting scene may be created in other ways.
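A minimal sketch of computing the per-viewpoint visual information with Open3D's RaycastingScene, combined with the class I / class II split of step 107; the use of one representative ray per triangle for the distance, and all parameter names, are simplifying assumptions of the sketch:

    import numpy as np
    import open3d as o3d

    def visible_info(target_mesh, obstacle_mesh, K, Pc, width, height, gamma):
        """Visible target-triangle indexes and distances for one camera model."""
        scene = o3d.t.geometry.RaycastingScene()
        target_id = scene.add_triangles(
            o3d.t.geometry.TriangleMesh.from_legacy(target_mesh))
        scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(obstacle_mesh))
        rays = o3d.t.geometry.RaycastingScene.create_rays_pinhole(
            intrinsic_matrix=o3d.core.Tensor(K),
            extrinsic_matrix=o3d.core.Tensor(Pc),
            width_px=width, height_px=height)
        hits = scene.cast_rays(rays)
        geom = hits['geometry_ids'].numpy().ravel()
        prim = hits['primitive_ids'].numpy().ravel()
        dirs = rays.numpy().reshape(-1, 6)[:, 3:]
        dist = hits['t_hit'].numpy().ravel() * np.linalg.norm(dirs, axis=1)
        on_target = geom == target_id             # rays whose first hit is the target
        tri, first = np.unique(prim[on_target], return_index=True)
        d = dist[on_target][first]                # one representative ray per triangle
        return tri[d <= gamma], tri[d > gamma]    # class I and class II grid indexes

Occluded target triangles are handled automatically: a ray blocked by the obstacle grid reports the obstacle's geometry id, so the triangle behind it is not counted as visible.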
On the basis of the above embodiment, in the present embodiment, the step of acquiring the working viewpoint 9 specifically includes steps 10801 to 10814:
Step 10801, initializing the grid coverage rate to 0 and the observed grid set to an empty set.
Step 10802, calculating the grid visibility count table.
Specifically, the visibility count table contains each grid index and the number of times each grid is observed by the effective viewpoints as a class I grid and as a class II grid, denoted N_I(i) and N_II(i) respectively, where i is the grid number. When N_I(i) is less than 3, the grid is deemed insufficiently visible: N_I(i) and N_II(i) are assigned a value of 0, and the visual information of the grid in all effective viewpoints is modified to invisible.
Step 10803, calculating the weight W_j of each effective viewpoint.
Specifically, the weight is determined by the formula W_j = Σ_i (a + γ·b), wherein j is the viewpoint number; a is the class I grid visibility identifier, a being 1 when grid i is seen by effective viewpoint j as a class I grid and 0 otherwise; b is the class II grid visibility identifier, b being 1 when grid i is seen by effective viewpoint j as a class II grid and 0 otherwise; and γ, a real number greater than 0 and less than 1, is the class II grid weight correction parameter.
Step 10804, taking the effective viewpoint with the largest weight as the newly added working viewpoint, adding it to the set of working viewpoints 9, and obtaining the grids visible from the newly added working viewpoint.
Step 10805, adding the class I grid indexes among the visible grids of the newly added working viewpoint into the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids.
Step 10806, updating the effective viewpoint weights W_j.
Specifically, the updated weight is determined by the formula W_j = Σ_i (c + γ·b), wherein c is the newly added class I grid visibility identifier: c is 1 when grid i is visible to effective viewpoint j as a class I grid and is not yet in the observed grid set, otherwise c is 0.
Step 10807, adding the effective viewpoints meeting the overlap rate requirement into the newly added working viewpoint information cache set, and adding the effective viewpoints not meeting the overlap rate requirement, together with their nearest points in the target point cloud, into the oblique-viewpoint information cache set.
Specifically, the effective viewpoints are sorted in descending order of weight and traversed in order. For the currently traversed effective viewpoint, the number of target grids it observes in common with each working viewpoint in the working viewpoint set is calculated as the common-view grid count, and the number of class I grids it would add relative to the observed grid set is calculated as the newly added class I grid count. When the ratio of the common-view grid count to the total number of grids observed by the currently traversed effective viewpoint and by the corresponding working viewpoint exceeds the overlap rate r, the effective viewpoint is added to the newly added working viewpoint cache set and the traversal stops; otherwise, the effective viewpoint and its nearest point in the target point cloud, together with the working viewpoint having the largest common-view grid count with that effective viewpoint and that working viewpoint's nearest point in the target point cloud, are added to the oblique-viewpoint information cache set.
In this embodiment, r takes a value of 0.6; in other embodiments, r may take any value less than 1.
Step 10808, when the newly added working viewpoint cache set is an empty set, adding an effective oblique viewpoint to the effective viewpoint set and returning to step 10801; otherwise, selecting the effective viewpoint with the largest weight from the newly added working viewpoint cache set as the newly added working viewpoint and adding it to the working viewpoint set.
Specifically, the effective oblique viewpoint is determined according to steps 108081 to 108083:
Step 108081, sorting the effective viewpoints in the oblique-viewpoint information cache set in descending order of the number of common-view grids.
Step 108082, traversing the sorted oblique-viewpoint cache set, selecting as the newly added effective viewpoint an effective viewpoint whose nearest point in the target point cloud is a different point from the nearest point of the working viewpoint in the target point cloud, and stopping the traversal.
Step 108083, establishing a camera oblique-view model for the newly added effective viewpoint, calculating its target grid visual information, and adding it to the effective viewpoint set. The camera oblique-view model is similar to the camera main-view model described in step 105, except that η3 is the unit direction vector from the newly added effective viewpoint to the nearest point of the working viewpoint in the target point cloud stored in step 10807.
Step 10809, calculating, as the new addition rate, the ratio of the number of class I grids of the newly added working viewpoint that are newly added relative to the observed grid set to the total number of target grids.
Step 10810, adding the newly added visible class I grid indexes into the observed grid set, and updating the grid coverage rate according to the number of grid indexes in the observed grid set and the total number of target grids.
Step 10811, repeating steps 10806 to 10810 while the number of viewpoints in the working viewpoint set is less than Nmax, the new addition rate is not less than the new addition rate threshold, and the grid coverage rate is less than f.
In this embodiment, Nmax takes a value of 500, the new addition rate threshold takes a value of 0.001, and f takes a value of 0.8; in other embodiments, Nmax and the new addition rate threshold may be set according to the available computing resources, and f may take any value greater than 0 and not greater than 1.
Step 10812, according to the class I grids visible from each viewpoint in the working viewpoint set, calculating the number of times each target grid is observed, and screening the target grids observed 1 or 2 times as the grids to be supplemented.
Step 10813, sorting the effective viewpoints that are not in the working viewpoint set in descending order of the number of their class I grids overlapping the grids to be supplemented; traversing the sorted effective viewpoints until a traversed effective viewpoint's overlap rate exceeds the overlap rate r, then stopping the traversal, adding that effective viewpoint to the working viewpoint set, and updating the observation count of each target grid according to the class I grid indexes observed by the added viewpoint.
Step 10814, repeating step 10813 until no target grid has an observation count of 1 or 2, or no effective viewpoint satisfies the condition of step 10813, and outputting the set of working viewpoints 9.
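Putting steps 10801 to 10811 together, a condensed greedy loop might look as follows; the overlap-rate check of step 10807 and the oblique-viewpoint fallback of step 10808 are omitted, Nmax, the new addition rate threshold and f follow this embodiment's values, and everything else is an assumption of the sketch:

    def select_working_viewpoints(class1, class2, n_grids, gamma,
                                  n_max=500, rate_min=0.001, f=0.8):
        observed, working = set(), []
        remaining = set(range(len(class1)))
        while len(working) < n_max and remaining:
            # effective viewpoint with the largest (updated) weight W_j
            best = max(remaining,
                       key=lambda j: len(class1[j] - observed) + gamma * len(class2[j]))
            new = class1[best] - observed
            if len(new) / n_grids < rate_min:     # new addition rate too small: stop
                break
            working.append(best)
            remaining.discard(best)
            observed |= class1[best]              # update the observed grid set
            if len(observed) / n_grids >= f:      # grid coverage rate target reached
                break
        return working, observed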
Based on the above embodiment, in this embodiment, the generation of the unmanned aerial vehicle flight path 12 includes steps 1091 to 1096:
Step 1091, inputting the unmanned aerial vehicle flight space 7 and the coordinates of the working viewpoints 9.
Step 1092, obtaining the maximum Z coordinate from the coordinates of the working viewpoints 9.
Step 1093, layering the flight space according to the Z coordinate value, and selecting as the transfer layer the layer that is not lower than the maximum Z coordinate, whose projection onto the XY plane contains the projected coordinates of all voxels, and that is closest to the maximum Z coordinate.
Step 1094, projecting all the working viewpoints onto the transfer layer, and taking the projected coordinates as the working control points 11;
Step 1095, planning the path of the working control points 11;
In this embodiment, an adaptive large neighborhood search algorithm (ALNS) is used for working control point path planning; in other embodiments, other traveling salesman problem algorithms may be used for control point path planning.
Step 1096, following the path planned over the working control points 11, the unmanned aerial vehicle flies to each working control point in sequence, flies in a vertical take-off and landing manner to each working viewpoint corresponding to that control point to acquire images, then returns and proceeds to the next working control point until all working viewpoints are covered; the flight path of the unmanned aerial vehicle over the whole process is the unmanned aerial vehicle flight path 12.
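The ALNS metaheuristic is beyond a short sketch; purely as an illustration of the control-point routing step, a greedy nearest-neighbour tour (a simple stand-in, not the algorithm of this embodiment) could look like:

    import numpy as np

    def control_point_tour(control_points, start):
        """Greedy nearest-neighbour visiting order over the working control points."""
        pts = np.asarray(control_points, dtype=float)
        cur = np.asarray(start, dtype=float)
        unvisited = list(range(len(pts)))
        order = []
        while unvisited:
            i = min(unvisited, key=lambda k: np.linalg.norm(pts[k] - cur))
            order.append(i)
            cur = pts[i]
            unvisited.remove(i)
        return order

A 2-opt refinement or a full ALNS implementation can then shorten this initial tour.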
On the basis of the above embodiment, in this embodiment, the flight parameters include the longitude, latitude, altitude, heading angle and pitch angle of the unmanned aerial vehicle at each working viewpoint.
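A minimal sketch of deriving the heading and pitch angles from the viewing direction η3 of a working viewpoint; the east-north-up frame and the clockwise-from-north heading convention are assumptions that must be matched to the chosen flight file format:

    import numpy as np

    def heading_and_pitch(eta3):
        # eta3: unit viewing direction in an east-north-up frame (assumption)
        east, north, up = eta3
        heading = np.degrees(np.arctan2(east, north)) % 360.0   # clockwise from north
        pitch = np.degrees(np.arcsin(np.clip(up, -1.0, 1.0)))   # negative looks down
        return heading, pitch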