Panoramic imaging method based on a space virtual reality camera

Technical Field
The invention relates to a panoramic imaging method based on a space virtual reality camera.
Background
With the development of image sensing and digital processing technology in the aerospace field, space cameras are increasingly widely applied, and multiple space cameras are usually adopted to acquire space images. The space background is complex, and increasing the number of space cameras to make up for an insufficient field of view easily leads to a shortage of payload resources. The common fisheye imaging method does not accord with human visual habits, because severe barrel or pincushion distortion accompanies a large-field-of-view space image captured by an ordinary fisheye camera; moreover, the traditional fisheye image scanning method must scan strictly line by line from top to bottom and from left to right, so the amount of calculation is large and the efficiency is low. The space environment is complex, and traditional space fisheye image correction imposes strict requirements on the internal and external parameters and distortion model of the space camera, while the distortion models themselves are complex and computationally expensive. Finally, space cameras generally do not provide an omnidirectional, immersive panoramic experience, and multiple cameras at multiple angles are required to acquire space images with different fields of view.
Disclosure of Invention
The invention aims to provide a panoramic imaging method based on a space virtual reality camera.
The technical scheme for realizing the aim of the invention is a panoramic imaging method based on a space virtual reality camera, and the method comprises the following steps:
step one: building a panoramic imaging system of the space virtual reality camera, covering a 360-degree viewing angle with four large-field-of-view fisheye lenses mounted on a satellite, and collecting a large-field-of-view space fisheye image;
step two: extracting the effective circular area of the large-field-of-view space fisheye image, performing K = MN − 4R² line scans on the effective imaging area, wherein M and N are the dimensions of the neighborhood window of the space fisheye image and R is the radius of the effective area of the large-field-of-view space fisheye image;
step three: if the radius R1 calculated from the horizontal tangent points differs from the radius R2 calculated from the vertical tangent points of the effective circular area extracted in step two, correcting the effective circular area of the large-field-of-view space fisheye image;
step four: carrying out reverse-mapping distortion correction, based on an equidistant projection geometric correction model, on the effective circular area of the large-field-of-view space fisheye image obtained in step three to obtain a space image that accords with human visual habits; the equidistant projection geometric correction model describes the imaging relation of an object point on the ideal imaging plane and on the fisheye lens imaging curved surface.
In the second step, the effective imaging area is line-scanned based on an improved line scanning method, wherein Ktop, Kbottom, Kleft and Kright are respectively the numbers of scans in the upward, downward, leftward and rightward directions over the whole scanning process, X and Y are the numbers of scans in the X and Y directions, and Ytop, Ybottom, Xleft and Xright are respectively the numbers of scans in the upward, downward, leftward and rightward directions over the effective imaging area.
In the third step, the method for correcting the effective circular area of the large-field-of-view space fisheye image is as follows:
setting a pixel brightness difference threshold T, calculating the difference Isub between the maximum brightness value Imax and the minimum brightness value Imin of the pixels on each scanning line, and searching for the coordinates A_left(x1, y1), A_right(x2, y2), B_top(x3, y3) and B_bottom(x4, y4) of the four tangent lines at the edge of the large-field-of-view space fisheye image and the intersection points of the four tangent lines; the values of R1 and R2 are calculated according to the following formulas:
I = 0.3r + 0.59g + 0.11b
R1 = (x2 − x1)/2, R2 = (y4 − y3)/2
R = max{R1, R2}
wherein the threshold T satisfies 20 ≤ T ≤ 50, I is the brightness of each pixel point on the scanning line, r, g and b are the red, green and blue channel values in the range 0 to 255, and the geometric center point of the circular effective area is O(x0, y0) with x0 = (x1 + x2)/2 and y0 = (y3 + y4)/2.
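As a minimal sketch of this computation (assuming the tangent coordinates have already been located by the scanning procedure described below; the R1 and R2 expressions are reconstructed from the geometry, since the original formula images are not reproduced here, and the helper names are illustrative):

```python
def scan_line_luminance(r, g, b):
    # Per-pixel luminance on a scanning line, I = 0.3r + 0.59g + 0.11b,
    # with r, g, b channel values in the range 0-255.
    return 0.3 * r + 0.59 * g + 0.11 * b

def effective_circle(x1, x2, y3, y4):
    # A_left(x1, y1) and A_right(x2, y2) are the horizontal tangent points;
    # B_top(x3, y3) and B_bottom(x4, y4) are the vertical ones.
    R1 = (x2 - x1) / 2.0                         # radius from the horizontal tangents
    R2 = (y4 - y3) / 2.0                         # radius from the vertical tangents
    R = max(R1, R2)                              # R = max{R1, R2}
    x0, y0 = (x1 + x2) / 2.0, (y3 + y4) / 2.0    # geometric center O(x0, y0)
    return R1, R2, R, (x0, y0)
```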
When R1 ≠ R2, the circular area of the fisheye image is corrected by multiplying by the inverse matrix M⁻¹:
where (u, v) is the pixel coordinate of the center point of the fisheye image, and α is the ratio of the total number of row pixels to the total number of column pixels of the fisheye image.
In the third step, the method for determining the coordinates of the four tangent lines at the edge of the large-field-of-view space fisheye image and the intersection points of the four tangent lines is as follows:
setting the brightness difference of the pixels in a certain row/column of the large-field-of-view space fisheye image as Isub, the original fisheye image is scanned by upper, lower, left and right groups of scanning lines respectively; each scanning line moves inward by one pixel after every scan, and over the whole scanning process Ktop, Kbottom, Kright and Kleft scans are performed in the four directions. When a scanning line satisfies Isub > T, its position is recorded as Line0, the brightness difference of the adjacent scanning line is then calculated and again compared with the threshold. If, in the up-down scanning direction, the maximum brightness difference Isub1 of the adjacent scanning line Line1 below satisfies Isub1 ≤ T, the remaining scanning lines are rescanned, and Isub is recalculated and compared with the threshold T; if Isub1 > T, the scanning line Line0 is the tangent line of the upper edge of the circular effective area of the fisheye image. The edge tangent lines in the left and right scanning directions are obtained in the same way.
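A sketch of this edge search for the top scanning direction (the other three directions are symmetric); the helper and its names are illustrative, assuming an H × W × 3 image array in OpenCV's b, g, r channel order:

```python
import numpy as np

def find_top_tangent(img, T=30):
    # img: H x W x 3 uint8 array (b, g, r channel order); T: brightness
    # difference threshold with 20 <= T <= 50. The scanning line starts at
    # the top row and moves down one pixel per scan, so no row is visited twice.
    h = img.shape[0]

    def i_sub(row):
        b = img[row, :, 0].astype(np.float64)
        g = img[row, :, 1].astype(np.float64)
        r = img[row, :, 2].astype(np.float64)
        I = 0.3 * r + 0.59 * g + 0.11 * b   # luminance of each pixel
        return I.max() - I.min()            # Isub = Imax - Imin on this line

    for row in range(h - 1):
        if i_sub(row) > T:                  # candidate tangent line Line0
            # Confirm with the adjacent line below (Line1): if its brightness
            # difference also exceeds T, Line0 touches the circle edge;
            # otherwise keep scanning the remaining lines.
            if i_sub(row + 1) > T:
                return row                  # y-coordinate of the top tangent
    return None                             # no tangent line found
```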
In the fourth step, the geometric correction model of equidistant projection is as follows:
the object point P forms image points P0 and P1 on the ideal imaging plane and on the fisheye lens imaging curved surface respectively, with image heights r0 and r1; a geometric mapping model between the source space fisheye distorted image and the space fisheye corrected image is established based on the equidistant projection geometric model r = fθ:
wherein: z is a fisheye lens imaging curved surface, h is the corrected target image height, and w is the corrected target imageImage width, O (w/2, h/2) is the coincidence point of geometric center and optical center, Q1 (a)1 ,b1 ) The original distortion point on the distorted image, R is the effective space image radius; according to r0 And r1 The Q (a, b) on the space fish eye correction image is obtained by inverting Q1.
Further, in order to obtain a better imaging effect, the method further comprises step five: restoring the space image from the space fisheye corrected image obtained in step four by cubic interpolation;
step six: carrying out SIFT feature matching on the restored space image (see the sketch after these steps);
step seven: splicing the feature point matching results by an optical flow method to realize immersive, omnidirectional panoramic imaging.
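Steps five to seven can be sketched with standard tools; the fragment below shows the SIFT matching of step six (cv2.SIFT_create requires a recent OpenCV build, and the ratio-test value is an illustrative choice, not one stated in the text):

```python
import cv2

def match_features(img_a, img_b, ratio=0.75):
    # Detect SIFT keypoints and descriptors on two restored space images.
    sift = cv2.SIFT_create()
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    # Two-nearest-neighbour matching with Lowe's ratio test, keeping only
    # distinctive feature-point pairs for the later splicing step.
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```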
The values of M and N in the second step are fixed or variable; the value of the pixel brightness difference threshold T in the third step is fixed or variable within the range of 20 to 50; the coefficient relation of r0 and r1 in the fourth step is fixed or variable; and the number of feature points matched by SIFT in the sixth step is fixed or variable.
By adopting the technical scheme, the invention has the following positive effects: (1) the method successfully makes up for the deficiencies of space cameras in imaging range, resolution, large field of view and the like, covers a 360-degree field angle with four space fisheye lenses, effectively saves payload resources, can provide effective assistance and reference for astronauts in space operations, offers a commercial space VR experience, and can be applied to the fields of space maintenance and space security in the future.
(2) The method improves the traditional line scanning method, effectively reduces the repeated-scanning rate of the fisheye image, greatly reduces the amount of calculation and improves the working efficiency of the imaging system; feature matching is performed with SIFT, and registration and splicing with an optical flow method, achieving an omnidirectional, immersive panoramic effect.
(3) The method establishes a geometric mapping model between the source space fisheye distorted image and the space fisheye corrected image according to the equidistant projection geometric model, realizes space image distortion correction with adjustable length and width scaling, and effectively improves the correction effect by 30%.
Drawings
In order that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a geometric topological relation diagram of large-field-of-view fisheye lens imaging.
FIG. 3 is a diagram of extracting the effective area of the large-field-of-view space fisheye image.
FIG. 4 is a geometric mapping topological relation diagram of the source space fisheye distorted image and the space fisheye corrected image.
Detailed Description
(Example 1)
Referring to FIG. 2: for any point P in space, the ray from P through the optical center O intersects the hemispherical surface at P1; through P1, a line is drawn parallel to the optical axis Z, intersecting the image plane at P2; P2 is then the image of the spatial point P. Owing to the refraction of light, the projection onto the camera's CCD image sensor array carries a certain angular deviation, which is the cause of image distortion.
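The deviation just described can be made explicit by comparing the ideal pinhole projection with the equidistant fisheye projection used later in step four (both are standard models, quoted here for reference):

```latex
% Ideal (distortion-free) pinhole projection of a ray with incidence angle theta:
r_0 = f \tan\theta
% Equidistant fisheye projection, as used by the correction model:
r_1 = f\,\theta
% Radial displacement of the fisheye image point relative to the ideal one:
\Delta r = r_0 - r_1 = f(\tan\theta - \theta) \ge 0
```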
The method flow of the embodiment is shown in FIG. 1:
the method comprises the following steps:
step one: building a panoramic imaging system of the space virtual reality camera, covering a 360-degree viewing angle with four large-field-of-view fisheye lenses mounted on a satellite, and collecting a large-field-of-view space fisheye image;
step two: extracting the effective circular area of the large-field-of-view space fisheye image, performing K = MN − 4R² line scans on the effective imaging area, wherein M and N are the dimensions of the neighborhood window of the space fisheye image and R is the radius of the effective area of the large-field-of-view space fisheye image; the effective imaging area is line-scanned based on an improved line scanning method,
wherein: ktop 、Kbottom 、Kleft 、Kright The times of scanning in the upper, lower, left and right directions in the whole scanning process are respectively, and X and Y are the times of scanning in the X direction and the Y direction; y istop 、Ybottom 、Xleft 、Xright The scanning times of the upper, lower, left and right directions of the effective imaging area are respectively obtained.
Step three: if the radius R1 calculated through the horizontal tangent point is different from the radius R2 calculated through the vertical tangent point in the effective circle area extracted in the step two, correcting the effective circle area of the fisheye image with the large space visual field;
the method for correcting the effective circular area of the large-field-of-view space fisheye image is as follows:
setting a pixel brightness difference threshold T, calculating the difference Isub between the maximum brightness value Imax and the minimum brightness value Imin of the pixels on each scanning line, and searching for the coordinates A_left(x1, y1), A_right(x2, y2), B_top(x3, y3) and B_bottom(x4, y4) of the four tangent lines at the edge of the large-field-of-view space fisheye image and the intersection points of the four tangent lines; specifically, the method for determining the coordinates of the four edge tangent lines and their intersection points is as follows:
setting the brightness difference of the pixels in a certain row/column of the large-field-of-view space fisheye image as Isub, the original fisheye image is scanned by upper, lower, left and right groups of scanning lines respectively; each scanning line moves inward by one pixel after every scan so as to avoid repetition, and over the whole scanning process Ktop, Kbottom, Kright and Kleft scans are performed in the four directions. When a scanning line satisfies Isub > T, its position is recorded as Line0, the brightness difference of the adjacent scanning line is then calculated and again compared with the threshold. If, in the up-down scanning direction, the maximum brightness difference Isub1 of the adjacent scanning line Line1 below satisfies Isub1 ≤ T, the remaining scanning lines are rescanned, and Isub is recalculated and compared with the threshold T; if Isub1 > T, the scanning line Line0 is the tangent line of the upper edge of the circular effective area of the fisheye image. The edge tangent lines in the left and right scanning directions are obtained in the same way.
The values of R1 and R2 are calculated according to the following formulas:
I = 0.3r + 0.59g + 0.11b
R1 = (x2 − x1)/2, R2 = (y4 − y3)/2
R = max{R1, R2}
wherein the threshold T satisfies 20 ≤ T ≤ 50, I is the brightness of each pixel point on the scanning line, r, g and b are the red, green and blue channel values in the range 0 to 255, and the geometric center point of the circular effective area is O(x0, y0) with x0 = (x1 + x2)/2 and y0 = (y3 + y4)/2.
When R1 ≠ R2, the circular area of the fisheye image is corrected by multiplying by the inverse matrix M⁻¹:
wherein (u, v) is the pixel coordinate of the center point of the fisheye image, and α is the ratio of the total number of row pixels to the total number of column pixels of the fisheye image.
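The matrix M itself is given as a formula image that is not reproduced here; the sketch below uses one plausible form, a scaling of the x-axis about the center by α, and should be read as an assumption rather than the exact matrix of the method:

```python
import numpy as np
import cv2

def correct_circular_area(img, u, v, alpha):
    # (u, v): pixel coordinate of the fisheye image center; alpha: ratio of
    # the total number of row pixels to column pixels. This assumed M scales
    # the x-axis about the center so the elliptical effective area becomes
    # circular (R1 = R2).
    M = np.float32([[alpha, 0.0, u * (1.0 - alpha)],
                    [0.0,   1.0, 0.0]])
    # With WARP_INVERSE_MAP, OpenCV treats M as the destination-to-source
    # map, i.e. the image is transformed by M^-1 as the text requires.
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```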
step four: carrying out reverse-mapping distortion correction, based on the equidistant projection geometric correction model, on the effective circular area of the large-field-of-view space fisheye image obtained in step three to obtain a space image that accords with human visual habits; the equidistant projection geometric correction model describes the imaging relation of an object point on the ideal imaging plane and on the fisheye lens imaging curved surface, and is as follows:
the object point P forms image points P0 and P1 on the ideal imaging plane and on the fisheye lens imaging curved surface respectively, with image heights r0 and r1; a geometric mapping model between the source space fisheye distorted image and the space fisheye corrected image is established based on the equidistant projection geometric model r = fθ:
wherein: z is the fish-eye lens imaging curved surface, h is the corrected target image height, w is the corrected target image width, O (w/2, h/2) is the coincidence point of the geometric center and the optical center, and Q1 (a)1 ,b1 ) The distortion point is the original distortion point on the distorted image, and R is the effective space image radius; according to r0 And r1 The Q (a, b) on the space fish eye correction image is obtained by inverting Q1.
step five: restoring the space image from the space fisheye corrected image obtained in step four by cubic interpolation;
step six: carrying out SIFT feature matching on the restored space image;
step seven: splicing the feature point matching results by an optical flow method to realize immersive, omnidirectional panoramic imaging, as sketched below.
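One plausible realization of step seven, combining the SIFT matches from step six with a homography and a Farneback dense optical flow check in the overlap region; the blending policy and all names are illustrative:

```python
import numpy as np
import cv2

def splice_pair(img_a, img_b, kp_a, kp_b, good):
    # Estimate a homography from the matched feature points; RANSAC
    # rejects mismatched pairs.
    dst_pts = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    src_pts = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    pano = cv2.warpPerspective(img_b, H, (w * 2, h))
    # Dense optical flow (Farneback) between the two views of the overlap
    # region; the flow field can drive a per-pixel blend of the seam.
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(pano[:, :w], cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    pano[:h, :w] = img_a   # simple overwrite; seam blending omitted here
    return pano, flow
```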
The embodiments described above set forth the objects, technical solutions and advantages of the present invention in further detail. It should be understood that the above-mentioned embodiments are only examples of the present invention and should not be construed as limiting it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.