CN110782394A - Panoramic video rapid splicing method and system - Google Patents

Panoramic video rapid splicing method and system

Info

Publication number
CN110782394A
CN110782394A (application CN201911001401.8A)
Authority
CN
China
Prior art keywords
camera
images
matrix
image
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911001401.8A
Other languages
Chinese (zh)
Inventor
刘洋
杨成龙
孙兆友
陈爽爽
魏炳捷
李德祥
周磊
张兴佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese People's Liberation Army 63861
Original Assignee
Chinese People's Liberation Army 63861
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese People's Liberation Army 63861
Priority to CN201911001401.8A
Publication of CN110782394A
Legal status: Pending

Abstract

The invention discloses a panoramic video fast splicing method comprising the following steps: S1, performing off-line calibration of a camera group using calibration object images, and acquiring and storing the projection transformation matrix of each camera; S2, acquiring the images to be spliced captured by the camera group for a shooting area, performing off-line detection and registration after preprocessing, and processing the registered images with the cameras' projection transformation matrices to obtain and store the image mapping matrix of the images to be spliced; and S3, acquiring in real time the current images to be spliced captured by the camera group, projecting them respectively into a standard coordinate system, calling the image mapping matrix to determine the overlapping areas of the current images to be spliced, and then performing splicing and fusion to generate the panoramic spliced image. The invention also provides a system implementing the method. The splicing seams are fused losslessly during splicing; the fusion algorithm has a small computational load, high operating efficiency and a good splicing effect; it can well meet the requirements of panoramic monitoring in various civil and military fields and is suitable for popularization.

Description

Panoramic video rapid splicing method and system
Technical Field
The invention relates to the technical field of video image synthesis, in particular to a method and a system for quickly splicing panoramic videos.
Background
The development and improvement of computer technology has driven continuous progress in video technology. High-quality, high-definition images have gradually become part of daily life, and video monitoring is no exception. As video monitoring technology improves, people's requirements for it grow: on the one hand, more and more information must be obtained; on the other hand, image quality must be ever higher. In typical video monitoring scenes such as parking lots, railway stations, squares and traffic intersections, the field of view of a single camera is very small, so operators often need to move back and forth among several monitoring screens, which causes visual fatigue and hinders the handling of emergencies. To photograph a large scene, one must otherwise use either a very expensive wide-angle camera or a fisheye-lens camera at the cost of image distortion. A cheap, high-quality wide-angle video shooting technology is therefore needed, which is why video stitching, synthesizing a wide-angle shot from multiple cameras, has been applied.
The video splicing technology is developed from an image splicing technology, wherein image splicing refers to a process of splicing two or more images with overlapped areas describing the same scene into a brand new image of a large scene through image registration and image fusion technologies. At present, the image stitching and synthesizing technology is widely applied to the fields of digital video processing, medical image analysis, remote sensing image processing and the like.
Traditional image splicing methods generally extract features such as ORB or SIFT from the pictures and then perform feature matching and fusion. For example, the invention patent application with publication number CN103516995A discloses a real-time panoramic video splicing method and device based on ORB features: an ORB feature extraction algorithm extracts feature points from each image channel at the same moment and computes the ORB feature vector of each feature point; video frame scenes are spliced using nearest-neighbor matching, the RANSAC matching algorithm and the like; finally the spliced video is output. This consumes a large amount of resources and time, and in specific scenes such as battle test scenarios, where completeness of the panoramic splice is the main requirement, such a splicing method is computationally heavy, splices poorly, and cannot meet the requirements.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a novel panoramic video fast splicing method with a small computational load, high operating efficiency and a good splicing effect. The invention also provides a system for implementing the method.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a panoramic video fast splicing method comprises the following steps:
s1, performing off-line calibration on a camera set through a calibration object image, and acquiring and storing a projection transformation matrix of a camera;
s2, acquiring images to be spliced acquired by a camera group aiming at a shooting area, performing off-line detection and registration after preprocessing, and processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and S3, acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to generate the panoramic spliced images.
Further, in the panoramic video fast stitching method, the offline detection and registration in step s2 includes:
carrying out feature point detection and registration on the image obtained by preprocessing by using an SIFT feature registration algorithm;
and eliminating noise points in the images to be spliced using a normalized cross-correlation algorithm, and purifying the set of matching points.
Further, in the method for quickly splicing the panoramic video, step s3 includes:
respectively projecting the current images to be spliced into a standard coordinate system by cylindrical projection, and then determining the overlapping area between the current images to be spliced according to the image mapping matrix;
performing exposure correction on two adjacent images to be spliced by adopting an HDR correction method, and fusing each image according to an overlapping area between the images and the global position of the image;
processing the splicing seams by adopting a weighted smoothing algorithm, and finally generating a panoramic spliced image; when the weighted smoothing algorithm is used for processing the splicing seams, the method comprises the following steps:
the gray value C of a pixel point in the image overlapping region is obtained by the weighted average of the gray values A and B of the corresponding points in the two adjacent images, that is, C = kA + (1 - k)B, where k is an adjustable factor.
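A minimal NumPy sketch of this weighted-average fusion (illustrative only; the linear ramp of k across the overlap width is an assumption, since the text only states that k is adjustable):

```python
import numpy as np

def blend_overlap(A, B):
    """Blend two grayscale overlap regions of equal shape.

    Each output pixel is C = k*A + (1 - k)*B, with k ramping linearly
    from 1 at the left edge of the overlap to 0 at the right edge,
    so each image dominates near its own side of the seam.
    """
    h, w = A.shape
    k = np.linspace(1.0, 0.0, w)                      # adjustable factor per column
    C = k[None, :] * A + (1.0 - k[None, :]) * B
    return C

# Tiny demonstration: constant images make the ramp visible.
A = np.full((2, 5), 100.0)
B = np.full((2, 5), 200.0)
C = blend_overlap(A, B)
```

With a smooth ramp the seam fades gradually from one image into the other instead of showing a hard edge.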
Further, in the method for quickly splicing the panoramic video, the step s1 includes:
s11, obtaining checkerboard images for camera calibration shot under a plurality of viewpoints;
s12, extracting corner points of the checkerboard calibration object image shot by the camera;
s13, calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information;
s14, optimizing external parameters of all cameras by using mixed errors;
and S15, calculating a projection transformation matrix corresponding to the camera according to the internal parameters and the external parameters.
Further, in the above method for quickly splicing panoramic video, step S15 comprises:
Let the average focal length of the current cameras be f, the rotation matrix R, the translation vector t, and the image-plane skew parameter s; the intersection of the camera optical axis with the image plane (the principal point) is (u_0, v_0).
In the camera mathematical model, a point P = [X, Y, Z, 1]^T in three-dimensional space is mapped to the point p = [x, y, 1]^T on the camera's two-dimensional image plane according to the following mathematical model:
p = MP, M = K[R | -Rt];
where the matrix M is the camera matrix, obtained by multiplying the camera intrinsic matrix K and the camera extrinsic matrix [R | -Rt]; the camera intrinsic matrix K consists of the following variables:
K = \begin{bmatrix} f & s & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}
The camera group used for video image splicing is set to meet the following conditions:
(u_0, v_0) ≈ (0, 0) and s ≈ 0, so the internal parameters of the multiple cameras can be represented by the same intrinsic matrix:
K = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
t ≈ 0 in the camera extrinsic matrix, so the extrinsic matrix degenerates into the rotation matrix R;
then the camera model is simplified to:
p = MP = KRP
Let the camera matrices of two cameras be M_1 = K_1 R_1 and M_2 = K_2 R_2 respectively. For the same point P = [X, Y, Z]^T in the images taken by the two cameras, let the homogeneous coordinates of P in the two images be p_1 = [x_1, y_1, 1]^T and p_2 = [x_2, y_2, 1]^T. Then the following formulas hold:
p_1 = K_1 R_1 P,  p_2 = K_2 R_2 P,  and therefore  p_2 = K_2 R_2 R_1^{-1} K_1^{-1} p_1.
The transformation of corresponding points between the cameras is thus represented by a 3 × 3 homography matrix H; the homography matrix H between the two images is represented in the form:
H = K_2 R_2 R_1^{-1} K_1^{-1} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
H is a non-singular matrix. Writing the points p_1, p_2 in non-homogeneous coordinates p_1 = [x_1, y_1] and p_2 = [x_2, y_2], the relationship of the two-dimensional projective transformation is expressed as:
x_2 = (a_{11} x_1 + a_{12} y_1 + a_{13}) / (a_{31} x_1 + a_{32} y_1 + a_{33}),  y_2 = (a_{21} x_1 + a_{22} y_1 + a_{23}) / (a_{31} x_1 + a_{32} y_1 + a_{33})
Let A = [a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}, a_{33}]^T. With n pairs of points (n > 4), each pair contributes two linear equations:
\begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_2 x_1 & -x_2 y_1 & -x_2 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -y_2 x_1 & -y_2 y_1 & -y_2 \end{bmatrix} A = 0, stacked over all n pairs.
The projection transformation matrix H is then calculated by least squares and stored for later steps to call.
On the other hand, the invention also provides a panoramic video fast splicing system comprising a processor and a memory, the memory storing a program which, when run by the processor, executes the following steps:
acquiring a plurality of calibration object images in different directions shot by a camera group for off-line calibration, and acquiring and storing a projection transformation matrix of the camera;
acquiring images to be spliced acquired by a camera group aiming at a shooting area, preprocessing the images to be spliced, performing off-line detection and registration, processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to obtain the panoramic image.
Further, in the above panoramic video fast mosaic system, when the program is run, the off-line detection and registration of the acquired images to be spliced comprises:
Carrying out feature point detection and registration on the image obtained by preprocessing by using an SIFT feature registration algorithm;
and eliminating noise points in the images to be spliced using a normalized cross-correlation algorithm, and purifying the set of matching points.
Further, in the above panoramic video fast mosaic system, when the program is run, the preprocessed current images to be spliced are respectively projected into a standard coordinate system, the image mapping matrix is called to determine the overlapping area of the current images to be spliced, and splicing fusion processing is then carried out to obtain the panoramic image; this process comprises the steps of
Respectively projecting the current images to be spliced into a standard coordinate system by cylindrical projection, and then determining the overlapping area between the current images to be spliced according to the image mapping matrix;
performing exposure correction on two adjacent images to be spliced by adopting an HDR correction method, and fusing each image according to an overlapping area between the images and the global position of the image;
processing the splicing seams by adopting a weighted smoothing algorithm, and finally generating a panoramic spliced image; when the weighted smoothing algorithm is used for processing the splicing seams, the method comprises the following steps:
the gray value C of a pixel point in the image overlapping region is obtained by the weighted average of the gray values A and B of the corresponding points in the two adjacent images, that is, C = kA + (1 - k)B, where k is an adjustable factor.
Further, in the above panoramic video fast mosaic system, when the program is executed, the following is executed: the method comprises the following steps of obtaining calibration object images in a plurality of different directions shot by a camera group for off-line calibration, and obtaining and storing a projection transformation matrix of the camera, wherein the method comprises the following steps:
acquiring checkerboard images for camera calibration shot under a plurality of viewpoints;
carrying out corner extraction on a checkerboard calibration object image shot by a camera;
calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information;
optimizing external parameters of all cameras by using the mixed errors;
and calculating a projection transformation matrix corresponding to the camera according to the internal parameter data and the external parameter data.
Further, in the above panoramic video fast mosaic system, when the program is run, calculating the projection transformation matrix of the camera from the internal parameter data and the external parameter data comprises the following steps:
Let the current cameras' average focal length be f, the rotation matrix R, the translation vector t, and the image-plane skew parameter s; the intersection of the camera optical axis with the image plane (the principal point) is (u_0, v_0);
In the camera mathematical model, a point P = [X, Y, Z, 1]^T in three-dimensional space is mapped to the point p = [x, y, 1]^T on the camera's two-dimensional image plane according to the following mathematical model:
p = MP, M = K[R | -Rt];
where the matrix M is the camera matrix, obtained by multiplying the camera intrinsic matrix K and the camera extrinsic matrix [R | -Rt]; the camera intrinsic matrix K consists of the following variables:
K = \begin{bmatrix} f & s & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}
The camera group used for video image splicing is set to meet the following conditions:
(u_0, v_0) ≈ (0, 0) and s ≈ 0, so the internal parameters of the multiple cameras can be represented by the same intrinsic matrix:
K = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
t ≈ 0 in the camera extrinsic matrix, so the extrinsic matrix degenerates into the rotation matrix R;
then the camera model is simplified to:
p = MP = KRP
Let the camera matrices of two cameras be M_1 = K_1 R_1 and M_2 = K_2 R_2 respectively. For the same point P = [X, Y, Z]^T in the images taken by the two cameras, let the homogeneous coordinates of P in the two images be p_1 = [x_1, y_1, 1]^T and p_2 = [x_2, y_2, 1]^T. Then the following formulas hold:
p_1 = K_1 R_1 P,  p_2 = K_2 R_2 P,  and therefore  p_2 = K_2 R_2 R_1^{-1} K_1^{-1} p_1.
The transformation of corresponding points between the cameras is thus represented by a 3 × 3 homography matrix H; the homography matrix H between the two images is represented in the form:
H = K_2 R_2 R_1^{-1} K_1^{-1} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
H is a non-singular matrix. Writing the points p_1, p_2 in non-homogeneous coordinates p_1 = [x_1, y_1] and p_2 = [x_2, y_2], the relationship of the two-dimensional projective transformation is expressed as:
x_2 = (a_{11} x_1 + a_{12} y_1 + a_{13}) / (a_{31} x_1 + a_{32} y_1 + a_{33}),  y_2 = (a_{21} x_1 + a_{22} y_1 + a_{23}) / (a_{31} x_1 + a_{32} y_1 + a_{33})
Let A = [a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}, a_{33}]^T. With n pairs of points (n > 4), each pair contributes two linear equations:
\begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_2 x_1 & -x_2 y_1 & -x_2 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -y_2 x_1 & -y_2 y_1 & -y_2 \end{bmatrix} A = 0, stacked over all n pairs.
The projection transformation matrix H is then calculated by least squares and stored for later steps to call.
Compared with the prior art, the invention has the following beneficial effects:
By combining off-line calibration with real-time splicing, the method solves the poor splicing effect and low shooting frame rate of traditional high-definition area-array scanning cameras; the morphology-based fade-in/fade-out algorithm achieves lossless fusion of the splicing seams, with a small computational load, high operating efficiency and a good splicing effect. It can well meet the requirements of panoramic monitoring in various civil and military fields and is suitable for popularization.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a schematic view of the calibration board used in the camera calibration step of the panoramic video fast splicing method in a specific implementation;
FIG. 2 is a flow chart of the steps executed when the program of the panoramic video fast splicing system of the invention is run;
FIG. 3 is a schematic diagram of processing the splicing seam with the weighted smoothing algorithm.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
A panoramic video fast splicing method comprises the following steps:
s1, performing off-line calibration on a camera set through a calibration object image, and acquiring and storing a projection transformation matrix of a camera;
s2, acquiring images to be spliced acquired by a camera group aiming at a shooting area, performing off-line detection and registration after preprocessing, and processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and S3, acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing.
In the method, panoramic image splicing is realized by combining off-line calibration with real-time splicing. The angles of the camera group are adjusted and fixed so that the acquired images have overlapping areas; the mapping relation of the cameras is calculated by off-line calibration and stored; the mapping matrix is then loaded at the start of each real-time video splicing run, saving a large amount of up-front feature extraction, matching and correction work.
In order to ensure the splicing effect, a specific embodiment of step S1 (performing off-line calibration of the camera set through calibration object images, and acquiring and storing the projection transformation matrices of the cameras) comprises the following steps:
s11, as shown in figure 1, a check series film calibration plate (the external dimension is 400mm multiplied by 400mm, the square side length is 30mm, and the precision is +/-0.02) is selected as the calibration object. Selecting a proper angle to fix the camera, shooting images of the checkerboard calibration objects, and shooting 15 photos in different directions by the camera through adjusting the direction of the calibration objects (experiments show that when the number of N is more than 10 and less than 20, N is the number of photos, and the calibration results are more accurate);
S12, extracting corners from the checkerboard calibration object images shot by the camera group; the corners can be extracted automatically, or users can be allowed to extract the checkerboard corners interactively;
s13, calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information; in this embodiment, rapid calculation is performed by an MATLAB calibration toolbox, which is a mature technique in the field and is not described again;
s14, optimizing external parameters of the camera by using the mixed errors, reducing the influence of objective factors on subsequent operation results, improving the calculation accuracy and further improving the splicing effect. The mixed error comprises a projection error and a correction error, the projection error refers to the projection position error of the same angular point in different views, the correction error is the difference of the same angular point in the y direction in left and right views after epipolar correction, and the mixed error is the weighted mixture of the two errors. Then
S141, projection error: for a pair of left and right cameras, defined as the difference between the corner point detected in the right view and the position (projection position) in the right view transformed from the corresponding point detected in the left view;
S142, correction error: for a pair of left and right cameras, defined as the difference in the y direction (perpendicular to the epipolar line) of the same corner point in the left and right views after epipolar correction.
Let the projection error be Eproj and the correction error be Erect, then the mixed error Emix is:
Emix=αEproj+(1-α)Erect
α is an adjustable weight. The Levenberg-Marquardt algorithm for nonlinear least squares is used as the optimization method to solve for the optimized camera external parameters.
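A minimal NumPy sketch of the mixed error above (the array layouts and the equal default weight α = 0.5 are assumptions; in the method this scalar would be minimized over the external parameters with the Levenberg-Marquardt algorithm):

```python
import numpy as np

def mixed_error(proj_right, detected_right, rect_left_y, rect_right_y, alpha=0.5):
    """Weighted mix Emix = alpha*Eproj + (1 - alpha)*Erect.

    proj_right     : (n, 2) corner positions projected from the left view into the right view
    detected_right : (n, 2) corner positions actually detected in the right view
    rect_left_y    : (n,) y-coordinates of the corners in the rectified left view
    rect_right_y   : (n,) y-coordinates of the same corners in the rectified right view
    alpha          : adjustable weight between the two error terms
    """
    e_proj = np.mean(np.linalg.norm(proj_right - detected_right, axis=1))
    e_rect = np.mean(np.abs(rect_left_y - rect_right_y))
    return alpha * e_proj + (1.0 - alpha) * e_rect

# Demonstration with a 1-pixel projection offset and a 0.5-pixel rectification offset.
proj = np.array([[10.0, 20.0], [30.0, 40.0]])
det = proj + np.array([1.0, 0.0])          # detected corners shifted 1 px in x
ly = np.array([20.0, 40.0])
ry = ly + 0.5                              # rectified views differ by 0.5 px in y
e = mixed_error(proj, det, ly, ry, alpha=0.5)
```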
S15, when a projective transformation matrix corresponding to the camera is calculated according to the internal parameter and the external parameter data, calculating the projective transformation matrix according to the internal parameter and the external parameter data optimized in the steps, and storing the projective transformation matrix in a configuration file for use in the subsequent steps; the method specifically comprises the following steps:
s151, setting the average focal length of all current cameras as f, setting the rotation matrix as R, setting the translation vector as t, setting the image plane miscut parameter as s, and setting the image plane miscut parameter to represent the degree that the light sensing plane of the cameras is not vertical to the optical axis of the lens; the focal point of the camera optical axis and the image plane is (u)0,v0)。
In the camera teaching model, a point P ═ X, Y, Z,1 in a three-dimensional space]TPoint p ═ x, y,1 mapped onto two-dimensional plane of camera]TThe following mathematical model was followed:
p=MP,M=K[RI-Rt];
wherein, the matrix M is a camera matrix and is obtained by multiplying a camera internal reference matrix K and a camera external reference matrix [ RI-Rt ] matrix; the camera internal reference matrix K consists of several variables:
Figure BDA0002241437870000121
obtaining a camera matrix M by utilizing the internal parameters and the external parameters of the camera obtained in the step S1; and obtaining a corresponding projection transformation matrix by obtaining the camera matrix M.
In the method, the camera group used for video image splicing is set to meet the following conditions:
(1) The cameras are of high quality and the focal lengths of the multiple cameras are similar; then (u_0, v_0) ≈ (0, 0) and s ≈ 0, and the internal parameters of the multiple cameras can be represented by the same intrinsic matrix:
K = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
(2) The depth of field of the scene shot by the camera group is deep enough that the positional differences among the cameras can be ignored; then t ≈ 0 in the extrinsic matrix, so the extrinsic matrix may degenerate into the rotation matrix R.
After the above two conditions are satisfied, the camera model is simplified as follows:
p = MP = KRP
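Under this simplified model, the inter-camera transformation derived in the following paragraphs is the homography H = K_2 R_2 R_1^{-1} K_1^{-1}; a small NumPy check with illustrative values (the focal length, rotation angle and test point are arbitrary assumptions):

```python
import numpy as np

def rot_y(theta):
    """Rotation about the vertical (y) axis, i.e. a camera pan."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Two cameras with identical intrinsics K = diag(f, f, 1) and pure rotations.
f = 800.0
K = np.diag([f, f, 1.0])
R1, R2 = rot_y(0.0), rot_y(0.1)

# Project a 3-D point with each camera: p = K R P, then normalize.
P = np.array([1.0, 2.0, 10.0])
p1 = K @ R1 @ P
p2 = K @ R2 @ P
p1, p2 = p1 / p1[2], p2 / p2[2]

# Homography between the two image planes maps p1 onto p2.
H = K @ R2 @ np.linalg.inv(R1) @ np.linalg.inv(K)
p2_from_p1 = H @ p1
p2_from_p1 = p2_from_p1 / p2_from_p1[2]
```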
Let the camera matrices of two cameras be M_1 = K_1 R_1 and M_2 = K_2 R_2 respectively. For the same point P = [X, Y, Z]^T in the images taken by the two cameras, let the homogeneous coordinates of P in the two images be p_1 = [x_1, y_1, 1]^T and p_2 = [x_2, y_2, 1]^T. Then the following formulas hold:
p_1 = K_1 R_1 P,  p_2 = K_2 R_2 P,  and therefore  p_2 = K_2 R_2 R_1^{-1} K_1^{-1} p_1.
Thus, the transformation of corresponding points between the cameras can be represented by a 3 × 3 homography matrix H; the homography matrix H between the two images can be expressed in the form:
H = K_2 R_2 R_1^{-1} K_1^{-1} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
H is a non-singular matrix, so writing the points p_1, p_2 in non-homogeneous coordinates p_1 = [x_1, y_1] and p_2 = [x_2, y_2], the relationship of the two-dimensional projective transformation can be expressed as:
x_2 = (a_{11} x_1 + a_{12} y_1 + a_{13}) / (a_{31} x_1 + a_{32} y_1 + a_{33}),  y_2 = (a_{21} x_1 + a_{22} y_1 + a_{23}) / (a_{31} x_1 + a_{32} y_1 + a_{33})
Let A = [a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}, a_{33}]^T. Since a pair of matching points determines 2 independent linear equations, under the projective transformation model 4 pairs of points are theoretically enough to solve for the projection transformation matrix H. For n point pairs (n > 4) in practical engineering, the stacked system is:
\begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_2 x_1 & -x_2 y_1 & -x_2 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -y_2 x_1 & -y_2 y_1 & -y_2 \end{bmatrix} A = 0, with one such pair of rows per point pair.
Finally, the projection transformation matrix H is calculated by the least squares method and stored in a local storage area.
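A minimal NumPy sketch of this least-squares solve (an illustration under assumptions: a_33 is fixed to 1 so eight unknowns remain, and the helper name estimate_homography is hypothetical):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 projective transform H such that dst ~ H @ src.

    src, dst : (n, 2) arrays of matching points, n >= 4.
    Fixes a33 = 1 and solves the resulting 2n x 8 linear system by least squares.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x1, y1), (x2, y2)) in enumerate(zip(src, dst)):
        M[2 * i] = [x1, y1, 1.0, 0.0, 0.0, 0.0, -x2 * x1, -x2 * y1]
        b[2 * i] = x2
        M[2 * i + 1] = [0.0, 0.0, 0.0, x1, y1, 1.0, -y2 * x1, -y2 * y1]
        b[2 * i + 1] = y2
    a, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.append(a, 1.0).reshape(3, 3)

# Recover a known homography from 6 noise-free correspondences.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 20], [20, 70]], float)
dst_h = (H_true @ np.hstack([src, np.ones((len(src), 1))]).T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = estimate_homography(src, dst)
```

With noise-free points the least-squares solution recovers H exactly; with real, noisy matches it returns the best fit in the least-squares sense.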
S2, acquiring images to be spliced acquired by a camera group aiming at a shooting area, performing off-line detection and registration after preprocessing, and processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and store the image mapping matrix:
The fixed camera group shoots images of their respective shooting areas, giving a group of images to be spliced (for example, 6 images); after preprocessing, off-line detection, registration and processing are carried out to obtain the image mapping matrix of the images to be spliced. Registration puts the points corresponding to the same spatial position in two adjacent images into one-to-one correspondence, and the overlapping area between adjacent images in the images to be processed is determined through the camera projection transformation matrix, facilitating the subsequent splicing and fusion. Specifically, this comprises:
S21, image denoising: in this step, a group of original images to be spliced, shot by the camera group for their respective shooting areas, is obtained first; the originals are converted into gray-scale images to enhance contrast and facilitate subsequent feature point extraction; the originals are then denoised using Wiener filtering and converted back into color images;
S22, image distortion correction: an image distortion correction model is established, and each image to be spliced is then geometrically undistorted, using the camera internal and external parameter data obtained in step S1, with the correction performed through a MATLAB software tool.
After the preprocessed pictures to be spliced are obtained, the characteristic point detection and the registration are carried out through the step S23, and then the matching point set is purified, wherein the method comprises the following steps:
s221, carrying out feature point detection and registration on the image obtained after the processing in the step S21 and/or the step S22 by using an SIFT feature registration algorithm;
S222, eliminating noise points in the images to be spliced using the normalized cross-correlation (NCC) algorithm, and purifying the matching points.
The method adopts off-line calibration and therefore does not depend on the time consumed in the global registration stage of the multi-view cameras, so in this embodiment the SIFT algorithm, which gives the best results among feature-point-based image registration methods, is selected to obtain matching points. However, matching points obtained by the traditional SIFT algorithm can include mismatches. The matching point set is therefore further purified using the normalized cross-correlation (NCC) of gray information: for a correct SIFT matching pair, the gray values on equal-sized templates around the two points in the two images are very close, so normalized cross-correlation can be used to check whether a matching point is a mismatch.
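This NCC purification step can be sketched as follows (NumPy only; the SIFT detection itself is assumed to have been done elsewhere, and the template size and acceptance threshold are illustrative assumptions):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-sized gray patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def purify_matches(img1, img2, matches, half=4, thresh=0.8):
    """Keep only matches whose surrounding templates correlate strongly.

    matches : list of ((x1, y1), (x2, y2)) integer pixel coordinate pairs.
    half    : template half-size; thresh : NCC acceptance threshold.
    """
    kept = []
    for (x1, y1), (x2, y2) in matches:
        p1 = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        p2 = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
        if p1.shape == p2.shape and p1.size and ncc(p1, p2) >= thresh:
            kept.append(((x1, y1), (x2, y2)))
    return kept

# Demonstration: a shifted copy correlates perfectly; unrelated patches do not.
rng = np.random.default_rng(0)
img1 = rng.random((40, 40))
img2 = np.roll(img1, 3, axis=1)            # img2 is img1 shifted right by 3 px
good = ((20, 20), (23, 20))                # consistent with the 3-px shift
bad = ((10, 10), (30, 35))                 # unrelated locations
kept = purify_matches(img1, img2, [good, bad])
```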
After feature-point extraction and matching, the mapping matrix of each image is obtained from the camera projection transformation matrix computed earlier. Once the mapping matrix is stored, it is simply loaded at the start of each real-time online video splicing session; a large amount of preliminary work such as feature extraction, matching and correction is thereby skipped, the image splicing and fusion steps are entered directly, and image splicing efficiency is improved.
And S3, acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing.
Because the image sequence consists of two-dimensional projections of the physical scene under different coordinate systems, directly splicing the captured images cannot guarantee visual consistency; the current images to be spliced must therefore each be projected into a standard coordinate system before splicing. The cylindrical coordinate transformation is simple and fast to compute, the projection images obtained by cylindrical projection of the original image at different positions are the same, and the stitched cylindrical panorama gives a good visual effect, describing scene information in detail over 360 degrees in the horizontal direction; the current images to be spliced are therefore processed by cylindrical projection.
The cylindrical projection formula is as follows:
x′ = f·arctan((x − W/2)/f) + f·arctan(W/(2f))
y′ = f·(y − H/2)/√((x − W/2)² + f²) + H/2
where x′ and y′ are the image coordinates after cylindrical projection, x and y are the original image coordinates, W and H are the width and height of the original image, and f is the focal length of the camera. During projection, a blank image of the same size as the current original image to be spliced is created to hold the projected image; for each point of the blank image, the back-projection formula gives its corresponding point on the original image, which determines the brightness value of that pixel, yielding the image coordinates of the cylindrically projected version of the current image to be spliced.
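The back-projection procedure just described can be sketched in NumPy. This is an illustrative sketch under stated assumptions: nearest-neighbour sampling, zero fill for pixels that map outside the source, and the function name `cylindrical_warp` are ours.

```python
import numpy as np

def cylindrical_warp(img, f):
    """Backward-map each pixel of a blank output image onto the source
    image via the inverse cylindrical projection, as the text describes."""
    H, W = img.shape[:2]
    out = np.zeros_like(img)
    yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
    # invert x' = f*atan((x - W/2)/f) + f*atan(W/(2f))
    theta = (xx - f * np.arctan(W / (2 * f))) / f
    x_src = f * np.tan(theta) + W / 2
    # invert y' = f*(y - H/2)/sqrt((x - W/2)^2 + f^2) + H/2
    y_src = (yy - H / 2) * np.sqrt((x_src - W / 2) ** 2 + f ** 2) / f + H / 2
    xs = np.round(x_src).astype(int)
    ys = np.round(y_src).astype(int)
    valid = (xs >= 0) & (xs < W) & (ys >= 0) & (ys < H)
    out[valid] = img[ys[valid], xs[valid]]   # nearest-neighbour lookup
    return out
```

The "blank image" of the text is `out`; every output pixel pulls its brightness from the back-projected source location, exactly the direction of mapping the paragraph specifies.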
S31, the images to be spliced are respectively projected to a standard coordinate system by adopting the mode, and then the overlapping area between the images is determined according to the mapping matrix of the images.
S32, exposure correction is performed on each pair of adjacent images to be spliced by the HDR correction method, reducing the brightness difference between images; each image is then fused according to the overlapping areas between images and the global position of the image. The image fusion step itself is mature technology in this field and is not described further;
and S33, processing the splicing seams by adopting a weighted smoothing algorithm, and finally generating a panoramic splicing image.
To obtain a larger exposure range, the array panoramic camera is set to automatic exposure mode. This has a serious consequence: because adjacent cameras face different directions, their exposures may differ greatly (by up to 3 times). This not only makes splicing difficult, it also causes strong discomfort in the 3D effect because the left-eye and right-eye exposures differ too much. Therefore, before image fusion, the method applies the HDR correction method to each pair of adjacent images to be spliced, reducing the brightness difference between them.
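A minimal stand-in for the exposure-equalisation idea is a single multiplicative gain that matches mean brightness over the shared region. This is our simplification, not the patent's HDR method; the function name and the clipping behaviour are illustrative.

```python
import numpy as np

def equalize_exposure(img_b, overlap_a, overlap_b):
    """Scale image B so that the mean brightness of the shared overlap
    region matches image A's -- a one-parameter stand-in for the HDR
    exposure correction applied before fusion."""
    gain = overlap_a.mean() / max(overlap_b.mean(), 1e-9)
    return np.clip(img_b.astype(np.float64) * gain, 0, 255).astype(np.uint8)
```

With a 3x exposure gap, `gain` would be about 1/3, pulling the brighter image down toward its neighbour before the seam is blended.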
Secondly, to avoid an obvious brightness change at the two ends of the seam of the spliced image, the seam must be processed during fusion. Possible methods include colour interpolation and multi-resolution spline techniques; the invention handles the seam with a faster and simpler weighted smoothing algorithm: the grey value C of a pixel in the image overlap region is obtained as the weighted average of the grey values A and B of the corresponding points in the two images, i.e. C = kA + (1 − k)B, where k is an adjustable factor, as shown in fig. 3.
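The seam rule C = kA + (1 − k)B can be sketched in NumPy. A linear per-column ramp for k (1 at the left edge of the overlap, 0 at the right) is one common choice for the gradual-in/gradual-out blend; the text does not fix how k varies, so that ramp is our assumption.

```python
import numpy as np

def blend_overlap(a, b):
    """Gradual-in / gradual-out blend of two overlapping strips:
    C = k*A + (1-k)*B, with k falling linearly from 1 to 0 across
    the overlap width so the seam vanishes smoothly."""
    h, w = a.shape[:2]
    k = np.linspace(1.0, 0.0, w)[None, :]          # per-column weight
    c = k * a.astype(np.float64) + (1 - k) * b.astype(np.float64)
    return c.astype(np.uint8)
```

At the left edge the output equals image A, at the right edge image B, and in between the grey values cross over gradually, which is exactly what removes the visible brightness step at the seam.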
The method can use multiple cameras to photograph the target locally, overcoming the difficulty of capturing a large image with a single camera, and achieves scanning-level panoramic stitching of multiple planar images with overlapping areas. At the same time, combining offline calibration with real-time splicing solves both the poor splicing quality of conventional high-definition images and the low frame rate of area-array scanning cameras. The invention achieves lossless fusion of the splicing seams based on a morphological gradual-in/gradual-out algorithm (weighted smoothing); the fusion algorithm has a small computational load, high efficiency and a good splicing effect, and can well meet the requirements of panoramic monitoring in various civil and military fields.
On the other hand, the invention also provides a system for quickly splicing the panoramic video, which is used for implementing the method. The system comprises a processor and a memory, wherein the memory stores a program, and when the program is executed by the processor, as shown in fig. 2, the following steps are executed:
acquiring a plurality of calibration object images in different directions shot by a camera group for off-line calibration, and acquiring and storing a projection transformation matrix of the camera;
acquiring images to be spliced acquired by a camera group aiming at a shooting area, preprocessing the images to be spliced, performing off-line detection and registration, processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to acquire panoramic images.
In a specific embodiment of the system of the present invention, when the calibration object images in a plurality of different orientations shot by the camera group are acquired for offline calibration, and the projection transformation matrix of the camera is obtained and stored, the process includes:
checkerboard images for camera calibration, shot from multiple viewpoints, are acquired and used to calculate the camera's internal parameter matrix K, external parameters (camera rotation R and camera translation T), distortion coefficients, and the like.
The calibration object is a checkerboard film calibration board (outer dimensions 400 mm × 400 mm; square side length 30 mm; accuracy ±0.02). The camera group is then fixed, images of the checkerboard calibration object are captured, and 15 photographs in different orientations are taken by adjusting the orientation of the calibration object (experiments show that with 10 < N < 20, where N is the number of photographs, the calibration result is more accurate);
then, corner extraction is performed on the checkerboard calibration object images shot by the camera; the extraction can be automatic, or the checkerboard corners can be extracted interactively by the user;
calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information; in the embodiment, rapid calculation is performed through an MATLAB calibration tool box;
and the external parameters of all cameras are optimized using a mixed error, which reduces the influence of objective factors on subsequent computations, improves accuracy, and thus improves the splicing result. The mixed error comprises a projection error and a correction error: the projection error is the error in the projected position of the same corner point in different views, and the correction error is the difference in the y direction of the same corner point in the left and right views after epipolar rectification; the mixed error is a weighted mixture of the two.
Let the projection error be Eproj and the correction error be Erect; the mixed error Emix is then:
Emix = αEproj + (1 − α)Erect
where α is an adjustable weight. The Levenberg-Marquardt algorithm for solving nonlinear least squares is used as the optimization method to obtain the optimized camera external parameters.
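The weighted mixed-error minimisation can be illustrated with a toy Levenberg-Marquardt problem. The residual functions below are stand-ins for the real reprojection and rectification residuals (which depend on corner correspondences); with α = 0.5 the optimum sits midway between the two individual minima.

```python
import numpy as np
from scipy.optimize import least_squares

alpha = 0.5   # adjustable weight between projection and correction error

def proj_residual(theta):
    return theta - 2.0     # stand-in: projection error minimised at theta = 2

def rect_residual(theta):
    return theta - 4.0     # stand-in: correction error minimised at theta = 4

def mixed_residuals(x):
    # sqrt weights so the squared-residual sum equals
    # Emix = alpha*Eproj^2 + (1-alpha)*Erect^2
    t = x[0]
    return [np.sqrt(alpha) * proj_residual(t),
            np.sqrt(1 - alpha) * rect_residual(t)]

# Levenberg-Marquardt solution of the weighted nonlinear least squares
sol = least_squares(mixed_residuals, x0=[0.0], method="lm")
```

Changing α shifts the solution between the projection-only optimum (α → 1) and the rectification-only optimum (α → 0), which is exactly the trade-off the mixed error encodes.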
The program further executes: calculating a projection transformation matrix corresponding to the camera according to the internal parameter data and the external parameter data; calculating a projective transformation matrix from the internal and external parameter data optimized in the steps, and storing the projective transformation matrix in a configuration file for use in the subsequent steps; the method specifically comprises the following steps:
setting the current focal length of the camera as f, the rotation matrix as R, the translation vector as t, the image plane miscut parameter as s, and the image plane miscut parameter representing the degree of the camera light sensing plane not perpendicular to the lens optical axis; the focal point of the camera optical axis and the image plane is (u)0,v0)。
In the camera mathematical model, a point P = [X, Y, Z, 1]^T in three-dimensional space is mapped onto the point p = [x, y, 1]^T on the camera's two-dimensional plane according to the following mathematical model:
p = M·P,  M = K[R | −Rt];
where the matrix M is the camera matrix, obtained by multiplying the camera internal parameter matrix K by the camera external parameter matrix [R | −Rt]; the camera internal parameter matrix K consists of the following variables:
K = [ f  s  u0 ]
    [ 0  f  v0 ]
    [ 0  0   1 ]
the camera matrix M can be solved by using the obtained internal parameters and external parameters of the camera; and obtaining a corresponding projection transformation matrix by obtaining the camera matrix M.
In the process of solving the projection transformation matrix of the camera, the camera group for video image splicing is set to meet the following conditions:
(1) the cameras are of high quality and the focal lengths of the multiple cameras are similar; then (u0, v0) ≈ (0, 0), s ≈ 0, and the internal parameters of the multiple cameras can be represented by the same internal parameter matrix:
K = [ f  0  0 ]
    [ 0  f  0 ]
    [ 0  0  1 ]
(2) the depth of field of the scene shot by the camera group is deep enough that the positional differences between the cameras can be ignored; then t ≈ 0 in the external parameter matrix, so the external parameter matrix can degenerate into the rotation matrix R.
After the above two conditions are satisfied, the camera model is simplified as follows:
p=MP=KRP
Let the camera matrices of the two cameras be M1 = K1·R1 and M2 = K2·R2, let P = [X, Y, Z]^T be the same point appearing in the images taken by the two cameras, and let the homogeneous coordinates of P in the two images be p1 = [x1, y1, 1]^T and p2 = [x2, y2, 1]^T; then the following formulas hold:
p1 = M1·P = K1·R1·P
p2 = M2·P = K2·R2·P,  so that p2 = K2·R2·R1^(−1)·K1^(−1)·p1
Thus, the transformation between corresponding points of the two cameras can be represented by a 3 × 3 homography matrix H; the homography matrix H between the two images can be expressed in the form:
H = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]
H is a non-singular matrix, so for the points p1 and p2 non-homogeneous coordinates can be used instead of homogeneous coordinates. Let the non-homogeneous coordinates of p1 and p2 be p1 = [x1, y1] and p2 = [x2, y2]; the relationship of the two-dimensional projective transformation can then be expressed as:
x2 = (a11·x1 + a12·y1 + a13) / (a31·x1 + a32·y1 + a33)
y2 = (a21·x1 + a22·y1 + a23) / (a31·x1 + a32·y1 + a33)
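The rotation-only homography H = K2·R2·R1^(−1)·K1^(−1) implied by the simplified model p = KRP can be checked numerically. This is a sketch: the focal length 800, the 10° relative rotation and the test point are arbitrary values satisfying the two stated conditions ((u0, v0) = (0, 0), t ≈ 0).

```python
import numpy as np

def rot_y(deg):
    """Rotation about the vertical axis, mimicking cameras fanned out horizontally."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# shared internal parameters with principal point at the origin
K1 = K2 = np.array([[800.0, 0, 0], [0, 800.0, 0], [0, 0, 1.0]])
R1, R2 = rot_y(0.0), rot_y(10.0)

# homography induced by pure rotation between the two views
H = K2 @ R2 @ np.linalg.inv(R1) @ np.linalg.inv(K1)

P = np.array([1.0, 0.5, 5.0])              # scene point (t ~ 0 assumption)
p1 = K1 @ R1 @ P; p1 /= p1[2]              # projection in camera 1
p2 = K2 @ R2 @ P; p2 /= p2[2]              # projection in camera 2
p2_via_H = H @ p1; p2_via_H /= p2_via_H[2] # camera-2 point predicted by H
```

After normalisation, the point mapped through H coincides with the direct projection into the second camera, confirming that under the two conditions a single 3 × 3 homography relates the views.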
Let A = [a11, a12, a13, a21, a22, a23, a31, a32, a33]^T. Because each pair of matching points determines 2 independent linear equations, in theory 4 point pairs suffice to solve the projective transformation matrix H under the projective transformation model. For the n point pairs (n > 4) available in practical engineering, each pair (x1i, y1i) ↔ (x2i, y2i), i = 1, …, n, contributes the two equations
a11·x1i + a12·y1i + a13 − a31·x1i·x2i − a32·y1i·x2i − a33·x2i = 0
a21·x1i + a22·y1i + a23 − a31·x1i·y2i − a32·y1i·y2i − a33·y2i = 0,
which are stacked into an over-determined 2n × 9 linear system in A:
Finally, the projection transformation matrix H is computed from this system by the least-squares method and stored in a local storage area.
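The least-squares solution of H from n > 4 point pairs can be sketched as a direct linear transform. This is an illustrative sketch: solving the stacked system A·h = 0 via SVD is one standard realisation of the least-squares step, and the function name is ours.

```python
import numpy as np

def solve_homography(pts1, pts2):
    """Stack the 2 linear equations contributed by each of the n >= 4
    matching pairs and solve the over-determined 2n x 9 system A.h = 0
    in the least-squares sense via SVD."""
    rows = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        rows.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
        rows.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1, -y2])
    A = np.asarray(rows, dtype=np.float64)
    h = np.linalg.svd(A)[2][-1]     # right singular vector of smallest sigma
    H = h.reshape(3, 3)
    return H / H[2, 2]              # normalise so a33 = 1
```

Given exact correspondences generated by a known homography, the solver recovers that homography up to numerical precision; with noisy SIFT matches it returns the least-squares fit instead.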
When the program acquires the images to be spliced captured by the camera group over the shooting area, performs offline registration after preprocessing, processes the registered images with the camera projection transformation matrix, and obtains and stores the image mapping matrix of the images to be spliced, it specifically comprises:
Image denoising: a group of original images to be spliced, captured by the camera group over their respective shooting areas, is first acquired; the originals are converted into grey-scale images to enhance contrast and ease subsequent feature-point extraction, then denoised with a Wiener filter and converted back into colour images;
Image distortion correction: an image distortion correction model is established, and each image to be spliced is then corrected by removing geometric distortion with a MATLAB software tool, using the camera internal and external parameter data obtained in the previous steps.
After the preprocessed images to be spliced are obtained, feature-point detection and registration are performed, and the matching point set is then purified, comprising:
performing feature-point detection and registration on the preprocessed images using the SIFT feature registration algorithm;
eliminating noise points in the images to be spliced using the normalized cross-correlation (NCC) algorithm, thereby purifying the matching point set.
In this embodiment the SIFT algorithm, which gives the best results, is selected to obtain the matching points; but matching points obtained by the conventional SIFT algorithm may include mismatches, so the invention further purifies the matching point set with the normalized cross-correlation (NCC) of grey-level information. For a correct SIFT matching pair, the grey values over equally sized templates centred on the two points are very close, so normalized cross-correlation can be used to test whether a matching point is a mismatch.
After feature-point extraction and matching, the mapping matrix of each image is obtained from the previously computed camera projection transformation matrix. The mapping matrix is loaded at the start of each online video splicing session, skipping a large amount of preliminary work such as feature extraction, matching and correction, and the image splicing and fusion steps are entered directly.
Executing the program to acquire the current images to be spliced collected by the camera set in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to obtain a panoramic image:
the method comprises the steps of preprocessing the current images to be spliced, wherein the preprocessing mode is the same as that before offline registration, and then respectively projecting the images to be spliced to a standard coordinate system.
The cylindrical projection formula is as follows:
x′ = f·arctan((x − W/2)/f) + f·arctan(W/(2f))
y′ = f·(y − H/2)/√((x − W/2)² + f²) + H/2
where x′ and y′ are the image coordinates after cylindrical projection, x and y are the original image coordinates, W and H are the width and height of the original image, and f is the focal length of the camera. During projection, a blank image of the same size as the current original image to be spliced is created to hold the projected image; for each point of the blank image, the back-projection formula gives its corresponding point on the original image, which determines the brightness value of that pixel, yielding the image coordinates of the cylindrically projected version of the current image to be spliced.
The images to be spliced are respectively projected to a standard coordinate system by adopting the mode, and then the overlapping area between the images is determined according to the mapping matrix of the images.
Then HDR correction is applied to each pair of adjacent images to be spliced, reducing the brightness difference between images; all images are then fused according to the overlapping areas between them and their global positions, the seam problem is handled with the weighted smoothing algorithm, and the panoramic spliced image is finally generated.
When the seam is processed, the fast and simple weighted smoothing algorithm is used: the grey value C of a pixel in the image overlap region is obtained as the weighted average of the grey values A and B of the corresponding points in the two images, i.e. C = kA + (1 − k)B, where k is an adjustable factor.
The system is used for implementing the method, has high operation efficiency and good splicing effect, and can well meet the requirements of panoramic monitoring in various civil and military fields in the process of splicing the panoramic video images; the step principle executed when the program is operated in the system of the invention corresponds to the method of the invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (10)

1. A panoramic video fast splicing method is characterized by comprising the following steps:
s1, performing off-line calibration on a camera set through a calibration object image, and acquiring and storing a projection transformation matrix of a camera;
s2, acquiring images to be spliced acquired by a camera group aiming at a shooting area, performing off-line detection and registration after preprocessing, and processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and S3, acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to generate the panoramic spliced images.
2. The panoramic video fast splicing method according to claim 1, wherein the offline detection and registration in step S2 comprises:
carrying out feature point detection and registration on the image obtained by preprocessing by using an SIFT feature registration algorithm;
and eliminating noise points in the images to be spliced by using a normalized cross-correlation algorithm, thereby purifying the matching point set.
3. The panoramic video fast splicing method according to claim 2, wherein the step S3 comprises:
respectively projecting the current images to be spliced to a standard coordinate system in a columnar projection mode, and then determining the overlapping area between the current images to be spliced according to the image mapping matrix;
performing exposure correction on two adjacent images to be spliced by adopting an HDR correction method, and fusing each image according to an overlapping area between the images and the global position of the image;
processing the splicing seams by adopting a weighted smoothing algorithm, and finally generating a panoramic spliced image; when the weighted smoothing algorithm is used for processing the splicing seams, the method comprises the following steps:
the gray value C of a pixel point in the image overlapping region is obtained as the weighted average of the gray values A and B of the corresponding points in two adjacent images, namely C = kA + (1 − k)B, where k is an adjustable factor.
4. The panoramic video fast splicing method according to any one of claims 1 to 3, wherein the step S1 comprises:
s11, obtaining checkerboard images for camera calibration shot under a plurality of viewpoints;
s12, extracting corner points of the checkerboard calibration object image shot by the camera;
s13, calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information;
s14, optimizing external parameters of all cameras by using mixed errors;
and S15, calculating a projection transformation matrix corresponding to the camera according to the internal parameters and the external parameters.
5. The panoramic video fast splicing method according to claim 4, wherein S15 comprises: setting the current average focal length of the cameras to f, the rotation matrix to R, the translation vector to t, the image-plane skew parameter to s, and the intersection of the camera optical axis with the image plane to (u0, v0);
In the camera mathematical model, a point P = [X, Y, Z, 1]^T in three-dimensional space is mapped onto the point p = [x, y, 1]^T on the camera's two-dimensional plane according to the following mathematical model:
p = M·P,  M = K[R | −Rt];
where the matrix M is the camera matrix, obtained by multiplying the camera internal parameter matrix K by the camera external parameter matrix [R | −Rt]; the camera internal parameter matrix K consists of the following variables:
K = [ f  s  u0 ]
    [ 0  f  v0 ]
    [ 0  0   1 ]
the camera group for video image splicing is set to meet the following conditions:
(u0, v0) ≈ (0, 0), s ≈ 0, and the internal parameters of the multiple cameras can be represented by the same internal parameter matrix:
K = [ f  0  0 ]
    [ 0  f  0 ]
    [ 0  0  1 ]
t is approximately equal to 0 in the external reference matrix of the camera, and the external reference matrix is degenerated into a rotation matrix R;
then, the camera model is simplified to:
p=MP=KRP
let the camera matrices of the two cameras be M1 = K1·R1 and M2 = K2·R2, let P = [X, Y, Z]^T be the same point appearing in the images taken by the two cameras, and let the homogeneous coordinates of P in the two images be p1 = [x1, y1, 1]^T and p2 = [x2, y2, 1]^T; then the following formulas hold:
p1 = M1·P = K1·R1·P
p2 = M2·P = K2·R2·P
the transformation of the corresponding points in the camera is represented by a homography matrix H of 3 x 3; the homography matrix H between the two images is represented in the form:
H = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]
H is a non-singular matrix; for the points p1 and p2, non-homogeneous coordinates are used instead of homogeneous coordinates. Let the non-homogeneous coordinates of p1 and p2 be p1 = [x1, y1] and p2 = [x2, y2]; the relationship of the two-dimensional projective transformation is then expressed as:
x2 = (a11·x1 + a12·y1 + a13) / (a31·x1 + a32·y1 + a33)
y2 = (a21·x1 + a22·y1 + a23) / (a31·x1 + a32·y1 + a33)
let A = [a11, a12, a13, a21, a22, a23, a31, a32, a33]^T; when n point pairs (n > 4) are used:
a11·x1i + a12·y1i + a13 − a31·x1i·x2i − a32·y1i·x2i − a33·x2i = 0
a21·x1i + a22·y1i + a23 − a31·x1i·y2i − a32·y1i·y2i − a33·y2i = 0,  i = 1, …, n
and calculating a projection transformation matrix H by using a least square method, and storing for later steps to call.
6. A panoramic video fast splicing system is characterized by comprising a processor and a memory, wherein the memory stores a program, and when the program is executed by the processor, the following steps are executed:
acquiring a plurality of calibration object images in different directions shot by a camera group for off-line calibration, and acquiring and storing a projection transformation matrix of the camera;
acquiring images to be spliced acquired by a camera group aiming at a shooting area, preprocessing the images to be spliced, performing off-line detection and registration, processing the images subjected to registration processing through a projection transformation matrix of the camera to obtain an image mapping matrix of the images to be spliced and storing the image mapping matrix;
and acquiring the current images to be spliced collected by the camera group in real time, projecting the images to be spliced into a standard coordinate system respectively after preprocessing, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then carrying out splicing fusion processing to obtain the panoramic image.
7. The panoramic video fast splicing system according to claim 6, wherein, when the program is executed, performing the offline detection and registration comprises:
Carrying out feature point detection and registration on the image obtained by preprocessing by using an SIFT feature registration algorithm;
and eliminating noise points in the images to be spliced by using a normalized cross-correlation algorithm, thereby purifying the matching point set.
8. The panoramic video fast splicing system according to claim 7, wherein, when the program is executed, projecting the preprocessed current images to be spliced respectively into a standard coordinate system, calling the image mapping matrix to determine the overlapping area of the current images to be spliced, and then performing splicing fusion processing to obtain the panoramic image comprises:
Respectively projecting the current images to be spliced to a standard coordinate system in a columnar projection mode, and then determining the overlapping area between the current images to be spliced according to the image mapping matrix;
performing exposure correction on two adjacent images to be spliced by adopting an HDR correction method, and fusing each image according to an overlapping area between the images and the global position of the image;
processing the splicing seams by adopting a weighted smoothing algorithm, and finally generating a panoramic spliced image; when the weighted smoothing algorithm is used for processing the splicing seams, the method comprises the following steps:
the gray value C of a pixel point in the image overlapping region is obtained as the weighted average of the gray values A and B of the corresponding points in two adjacent images, namely C = kA + (1 − k)B, where k is an adjustable factor.
9. The panoramic video fast splicing system according to any one of claims 6-8, wherein, when the program is executed, acquiring the calibration object images in a plurality of different orientations shot by the camera group for offline calibration, and obtaining and storing the projection transformation matrix of the camera, comprises:
acquiring checkerboard images for camera calibration shot under a plurality of viewpoints;
carrying out corner extraction on a checkerboard calibration object image shot by a camera;
calculating internal parameters, external parameters and distortion coefficients of the camera according to the acquired angular point information;
optimizing external parameters of all cameras by using the mixed errors;
and calculating a projection transformation matrix corresponding to the camera according to the internal parameter data and the external parameter data.
10. The panoramic video fast splicing system according to claim 9, wherein, when the program is executed, calculating the projection transformation matrix corresponding to the camera according to the internal parameter data and the external parameter data comprises:
setting the current average focal length of the cameras to f, the rotation matrix to R, the translation vector to t, the image-plane skew parameter to s, and the intersection of the camera optical axis with the image plane to (u0, v0);
In the camera mathematical model, a point P = [X, Y, Z, 1]^T in three-dimensional space is mapped onto the point p = [x, y, 1]^T on the camera's two-dimensional plane according to the following mathematical model:
p = M·P,  M = K[R | −Rt];
where the matrix M is the camera matrix, obtained by multiplying the camera internal parameter matrix K by the camera external parameter matrix [R | −Rt]; the camera internal parameter matrix K consists of the following variables:
K = [ f  s  u0 ]
    [ 0  f  v0 ]
    [ 0  0   1 ]
the camera group for video image splicing is set to meet the following conditions:
(u0, v0) ≈ (0, 0), s ≈ 0, and the internal parameters of the multiple cameras can be represented by the same internal parameter matrix:
K = [ f  0  0 ]
    [ 0  f  0 ]
    [ 0  0  1 ]
t is approximately equal to 0 in the external reference matrix of the camera, and the external reference matrix is degenerated into a rotation matrix R;
then, the camera model is simplified to:
p=MP=KRP
let the camera matrices of the two cameras be M1 = K1·R1 and M2 = K2·R2, let P = [X, Y, Z]^T be the same point appearing in the images taken by these two cameras, and let the homogeneous coordinates of P in the two images be p1 = [x1, y1, 1]^T and p2 = [x2, y2, 1]^T; then the following formulas hold:
p1 = M1·P = K1·R1·P
p2 = M2·P = K2·R2·P
the transformation of the corresponding points in the camera is represented by a homography matrix H of 3 x 3; the homography matrix H between the two images is represented in the form:
H = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]
H is a non-singular matrix; for the points p1 and p2, non-homogeneous coordinates are used instead of homogeneous coordinates. Let the non-homogeneous coordinates of p1 and p2 be p1 = [x1, y1] and p2 = [x2, y2]; the relationship of the two-dimensional projective transformation is then expressed as:
x2 = (a11·x1 + a12·y1 + a13) / (a31·x1 + a32·y1 + a33)
y2 = (a21·x1 + a22·y1 + a23) / (a31·x1 + a32·y1 + a33)
let A = [a11, a12, a13, a21, a22, a23, a31, a32, a33]^T; when n point pairs (n > 4) are used:
a11·x1i + a12·y1i + a13 − a31·x1i·x2i − a32·y1i·x2i − a33·x2i = 0
a21·x1i + a22·y1i + a23 − a31·x1i·y2i − a32·y1i·y2i − a33·y2i = 0,  i = 1, …, n
and calculating a projection transformation matrix H by using a least square method, and storing for later steps to call.
CN201911001401.8A2019-10-212019-10-21Panoramic video rapid splicing method and systemPendingCN110782394A (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN201911001401.8ACN110782394A (en)2019-10-212019-10-21Panoramic video rapid splicing method and system

Publications (1)

Publication Number | Publication Date
CN110782394A | 2020-02-11

Family

ID=69386114

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911001401.8A (Pending, published as CN110782394A) | Panoramic video rapid splicing method and system | 2019-10-21 | 2019-10-21

Country Status (1)

Country | Link
CN (1) | CN110782394A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101710932A (en)* | 2009-12-21 | 2010-05-19 | 深圳华为通信技术有限公司 | Image stitching method and device
CN104574339A (en)* | 2015-02-09 | 2015-04-29 | 上海安威士科技股份有限公司 | Multi-scale cylindrical projection panorama image generating method for video monitoring
CN105447850A (en)* | 2015-11-12 | 2016-03-30 | 浙江大学 | Panorama stitching synthesis method based on multi-view images
CN106339981A (en)* | 2016-08-25 | 2017-01-18 | 安徽协创物联网技术有限公司 | Panorama stitching method
CN109064404A (en)* | 2018-08-10 | 2018-12-21 | 西安电子科技大学 | Panorama stitching method and system based on multi-camera calibration
CN109523492A (en)* | 2019-01-24 | 2019-03-26 | 重庆邮电大学 | Global correction method for irregular distortion of wide-angle cameras
CN109785371A (en)* | 2018-12-19 | 2019-05-21 | 昆明理工大学 | Sun image registration method based on normalized cross-correlation and SIFT


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
应礼剑: "Research on 360-Degree Panoramic Image Stitching Technology Based on Multiple Cameras", vol. 2015, no. 12*
盛安宇: "Research on Video Stitching Algorithms for Multi-View Cameras", China Masters' Theses Full-text Database, Information Science and Technology, no. 2017, pages 138-1209*
赵岩: "Fast SIFT Image Stitching Combined with Projection Error Correction", vol. 25, no. 25, pages 1645-1650*
马嘉琳; 张锦明; 孙卫新: "Research on Panorama Stitching Methods Based on Camera Calibration", vol. 29, no. 05, pages 1112-1119*

Cited By (88)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111369495A (en)*2020-02-172020-07-03珀乐(北京)信息科技有限公司Video-based panoramic image change detection method
CN111369495B (en)*2020-02-172024-02-02珀乐(北京)信息科技有限公司Panoramic image change detection method based on video
CN113674145A (en)*2020-05-152021-11-19北京大视景科技有限公司 Spherical stitching and real-time alignment of PTZ moving images
CN113674145B (en)*2020-05-152023-08-18北京大视景科技有限公司Spherical surface splicing and real-time alignment method for PTZ (pan-tilt-zoom) moving image
CN111627008B (en)*2020-05-272023-09-12深圳市华汉伟业科技有限公司Object surface detection method and system based on image fusion and storage medium
CN111627008A (en)*2020-05-272020-09-04深圳市华汉伟业科技有限公司Object surface detection method and system based on image fusion and storage medium
CN111798374A (en)*2020-06-242020-10-20浙江大华技术股份有限公司Image splicing method, device, equipment and medium
CN111915482A (en)*2020-06-242020-11-10福建(泉州)哈工大工程技术研究院Image splicing method suitable for fixed scene
CN111915482B (en)*2020-06-242022-08-05福建(泉州)哈工大工程技术研究院Image splicing method suitable for fixed scene
CN112034198A (en)*2020-07-032020-12-04朱建国High-shooting-speed bullet continuous-firing initial speed measuring method
CN111815517A (en)*2020-07-092020-10-23苏州万店掌网络科技有限公司Self-adaptive panoramic stitching method based on snapshot pictures of dome camera
CN111899174A (en)*2020-07-292020-11-06北京天睿空间科技股份有限公司Single-camera rotation splicing method based on deep learning
CN112001844A (en)*2020-08-182020-11-27南京工程学院Acquisition device for acquiring high-definition images of rice planthoppers and rapid splicing method
CN112102168A (en)*2020-09-032020-12-18成都中科合迅科技有限公司Image splicing method and system based on multiple threads
CN112085659B (en)*2020-09-112023-01-06中德(珠海)人工智能研究院有限公司Panorama splicing and fusing method and system based on dome camera and storage medium
CN112085659A (en)*2020-09-112020-12-15中德(珠海)人工智能研究院有限公司 A panorama stitching fusion method, system and storage medium based on spherical screen camera
CN112188163A (en)*2020-09-292021-01-05厦门汇利伟业科技有限公司Method and system for automatic de-duplication splicing of real-time video images
CN112381710A (en)*2020-10-132021-02-19中铭谷智能机器人(广东)有限公司2D vision algorithm system for automobile plate spraying
CN112308777A (en)*2020-10-162021-02-02易思维(杭州)科技有限公司Rapid image splicing method for plane and plane-like parts
CN112419383B (en)*2020-10-302023-07-28中山大学 Method, device and storage medium for generating a depth map
CN112419383A (en)*2020-10-302021-02-26中山大学Depth map generation method and device and storage medium
CN112308986B (en)*2020-11-032024-04-12豪威科技(武汉)有限公司Vehicle-mounted image stitching method, system and device
CN112308986A (en)*2020-11-032021-02-02豪威科技(武汉)有限公司Vehicle-mounted image splicing method, system and device
CN112449093A (en)*2020-11-052021-03-05北京德火科技有限责任公司Three-dimensional panoramic video fusion monitoring platform
CN112437327A (en)*2020-11-232021-03-02北京瞰瞰科技有限公司Real-time panoramic live broadcast splicing method and system
CN112437327B (en)*2020-11-232023-05-16瞰瞰技术(深圳)有限公司Real-time panoramic live broadcast splicing method and system
CN114559131A (en)*2020-11-272022-05-31北京颖捷科技有限公司Welding control method and device and upper computer
CN113063704B (en)*2020-12-042022-03-11湖北沛丰生物科技股份有限公司Particle fullness analysis platform and method
CN113063704A (en)*2020-12-042021-07-02泰州市朗嘉馨网络科技有限公司Particle fullness analysis platform and method
CN112862674A (en)*2020-12-072021-05-28西安电子科技大学Automatic Stitch algorithm-based multi-image automatic splicing method and system
CN114612311A (en)*2020-12-072022-06-10中国科学院长春光学精密机械与物理研究所 Ring 2π space imaging and seamless stitching method
CN112862674B (en)*2020-12-072024-02-13西安电子科技大学Multi-image automatic splicing method and system
CN112581369A (en)*2020-12-242021-03-30中国银联股份有限公司Image splicing method and device
CN112712037A (en)*2020-12-312021-04-27苏州清研微视电子科技有限公司Vehicle-mounted environment sensing method and system based on panoramic image and target detection
CN112954234A (en)*2021-01-282021-06-11天翼物联科技有限公司Method, system, device and medium for multi-video fusion
CN115082305A (en)*2021-03-152022-09-20爱思开海力士有限公司Apparatus and method for generating panoramic image
CN113055613A (en)*2021-03-182021-06-29上海云话科技有限公司Panoramic video stitching method and device based on mine scene
CN113052119B (en)*2021-04-072024-03-15兴体(广州)智能科技有限公司Ball game tracking camera shooting method and system
CN113052119A (en)*2021-04-072021-06-29兴体(广州)智能科技有限公司Ball motion tracking camera shooting method and system
CN113221665A (en)*2021-04-192021-08-06东南大学Video fusion algorithm based on dynamic optimal suture line and improved gradual-in and gradual-out method
CN113222878A (en)*2021-06-042021-08-06杭州海康威视数字技术股份有限公司Image splicing method
CN113222878B (en)*2021-06-042023-09-05杭州海康威视数字技术股份有限公司Image stitching method
CN115439547B (en)*2021-08-182025-08-05北京车和家信息技术有限公司 Camera calibration method, device, image stitching method, camera and vehicle
CN115439547A (en)*2021-08-182022-12-06北京车和家信息技术有限公司 Camera calibration method, device, image stitching method, camera and vehicle
CN113781373A (en)*2021-08-262021-12-10云从科技集团股份有限公司Image fusion method, device and computer storage medium
CN113781373B (en)*2021-08-262024-08-23云从科技集团股份有限公司Image fusion method, device and computer storage medium
CN113689339A (en)*2021-09-082021-11-23北京经纬恒润科技股份有限公司Image splicing method and device
CN113689339B (en)*2021-09-082023-06-20北京经纬恒润科技股份有限公司 Image splicing method and device
CN113810665A (en)*2021-09-172021-12-17北京百度网讯科技有限公司Video processing method, device, equipment, storage medium and product
CN114022562A (en)*2021-10-252022-02-08同济大学 A panoramic video stitching method and device for maintaining pedestrian integrity
CN114007014A (en)*2021-10-292022-02-01北京环境特性研究所Method and device for generating panoramic image, electronic equipment and storage medium
CN114007014B (en)*2021-10-292023-06-16北京环境特性研究所Method and device for generating panoramic image, electronic equipment and storage medium
CN114092706A (en)*2021-11-112022-02-25浩云科技股份有限公司 A sports panoramic football video recording method, system, storage medium and terminal device
WO2023104115A1 (en)*2021-12-102023-06-15华为技术有限公司Panoramic video acquiring method, apparatus and system, device, and storage medium
CN114331835A (en)*2021-12-152022-04-12中国飞行试验研究院Panoramic image splicing method and device based on optimal mapping matrix
WO2023173572A1 (en)*2022-03-172023-09-21浙江大学Real-time panoramic imaging method and device for underwater cleaning robot
CN114418862A (en)*2022-03-312022-04-29苏州挚途科技有限公司Method, device and system for splicing side images
CN114972023A (en)*2022-04-212022-08-30合众新能源汽车有限公司 Image mosaic processing method, device, equipment and computer storage medium
CN114998105A (en)*2022-06-022022-09-02成都弓网科技有限责任公司Monitoring method and system based on multi-camera pantograph video image splicing
CN115086629A (en)*2022-06-102022-09-20谭健Sphere multi-lens real-time panoramic three-dimensional imaging system
CN115086629B (en)*2022-06-102024-02-27谭健Real-time panoramic three-dimensional imaging system with multiple spherical lenses
CN115050004B (en)*2022-06-132025-06-27江苏范特科技有限公司 Pedestrian cross-mirror positioning method, system and medium based on top-view camera
CN115050004A (en)*2022-06-132022-09-13江苏范特科技有限公司Pedestrian mirror-crossing positioning method, system and medium based on top view camera
CN115222591A (en)*2022-06-222022-10-21中国科学院苏州生物医学工程技术研究所Rapid multi-eye fisheye image and video stitching method irrelevant to camera equipment parameters
CN115222596A (en)*2022-07-152022-10-21山东中博智云计算机科技有限公司 Image stitching method and device for jointly shooting large-size displays with line scan cameras
CN116320219A (en)*2022-09-092023-06-23北京奕斯伟计算技术股份有限公司Image stitching method and related device
CN115496722A (en)*2022-09-222022-12-20广西成电智能制造产业技术有限责任公司On-line splicing and improving method for splicing quality of vehicle-mounted panoramic image
CN116016816A (en)*2022-12-132023-04-25之江实验室Embedded GPU zero-copy panoramic image stitching method and system for improving L-ORB algorithm
CN116016816B (en)*2022-12-132024-03-29之江实验室Embedded GPU zero-copy panoramic image stitching method and system for improving L-ORB algorithm
CN117952826A (en)*2023-04-282024-04-30深圳市裕同包装科技股份有限公司 Image stitching method, device, equipment, medium and program product
CN116912147A (en)*2023-08-032023-10-20西安交通大学 A real-time splicing method of panoramic video based on embedded platform
CN116912147B (en)*2023-08-032025-05-06西安交通大学Panoramic video real-time splicing method based on embedded platform
CN117274393A (en)*2023-08-232023-12-22西安中科创达软件有限公司 Determination method, device, equipment and storage medium of camera external parameter calibration coefficient
CN117237192A (en)*2023-09-252023-12-15中国人民解放军61540部队Full-frame image stitching method and device for field-of-view segmentation integrated area array camera
CN117237192B (en)*2023-09-252024-05-31中国人民解放军61540部队Full-frame image stitching method and device for field-of-view segmentation integrated area array camera
CN117726559A (en)*2023-12-142024-03-19江苏北方湖光光电有限公司Luminance self-adaptive matching method based on low-illumination multi-view image stitching
CN117455767B (en)*2023-12-262024-05-24深圳金三立视频科技股份有限公司Panoramic image stitching method, device, equipment and storage medium
CN117455767A (en)*2023-12-262024-01-26深圳金三立视频科技股份有限公司 Panoramic image stitching method, device, equipment and storage medium
CN118014832A (en)*2024-04-092024-05-10深圳精智达技术股份有限公司Image stitching method and related device based on linear feature invariance
CN118014832B (en)*2024-04-092024-07-26深圳精智达技术股份有限公司Image stitching method and related device based on linear feature invariance
CN118247142A (en)*2024-04-152024-06-25四川新视创伟超高清科技有限公司Multi-view splicing method and system applied to large-view-field monitoring scene
CN118247142B (en)*2024-04-152024-09-24四川国创新视超高清视频科技有限公司Multi-view splicing method and system applied to large-view-field monitoring scene
CN118828201A (en)*2024-08-302024-10-22四川国创新视超高清视频科技有限公司 A distributed online fusion and stitching method for cameras in large-scale scene monitoring
CN119228786A (en)*2024-11-282024-12-31杭州宇泛智能科技股份有限公司 Panoramic identification method, system and electronic equipment for steel bar binding points
CN119379815B (en)*2024-12-262025-03-25南京达道电子科技有限公司 Camera extrinsic calibration result correction method and system based on OpenCV Eigen
CN119379815A (en)*2024-12-262025-01-28南京达道电子科技有限公司 Camera extrinsic calibration result correction method and system based on OpenCV Eigen
CN119478049B (en)*2025-01-092025-05-09深圳精智达技术股份有限公司 A method, system and related device for stitching images taken by multiple cameras for appearance inspection
CN119478049A (en)*2025-01-092025-02-18深圳精智达技术股份有限公司 A method, system and related device for stitching images taken by multiple cameras for appearance inspection

Similar Documents

Publication | Publication Date | Title
CN110782394A (en)Panoramic video rapid splicing method and system
CN113221665B (en) A video fusion algorithm based on dynamic optimal stitching line and improved fade-in and fade-out method
CN112085659B (en)Panorama splicing and fusing method and system based on dome camera and storage medium
CN109064404A (en)Panorama stitching method and panorama stitching system based on multi-camera calibration
CN103971375B (en)Space calibration method for panoramic staring cameras based on image stitching
CN111815517B (en)Self-adaptive panoramic stitching method based on snapshot pictures of dome camera
WO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN107038724A (en)Panoramic fisheye camera image correction, synthesis and depth of field reconstruction method and system
CN107424118A (en)Based on the spherical panorama mosaic method for improving Lens Distortion Correction
CN108200360A (en)Real-time video stitching method for multi-fisheye-lens panoramic cameras
CN111461963A (en)Fisheye image splicing method and device
CN107578450B (en)Method and system for calibrating assembly error of panoramic camera
JP2003179800A (en)Device for generating multi-viewpoint image, image processor, method and computer program
CN114331835B (en) A panoramic image stitching method and device based on optimal mapping matrix
CN107845056A (en)Fish eye images panorama generation method based on cylinder model
CN111866523A (en)Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN110211220A (en)The image calibration suture of panorama fish eye camera and depth reconstruction method and its system
CN110544203A (en) A Parallax Image Mosaic Method Combining Motion Least Squares and Line Constraints
Babbar et al.Homography theories used for image mapping: a review
CN108921787A (en)Photovoltaic module image split-joint method based on infrared video
CN114972025A (en)Image fast splicing method based on YUV color space
CN117596350A (en)Video stitching method and device based on low-overlap region scene
CN114463170A (en) A large scene image stitching method for AGV applications
CN111862241B (en)Human body alignment method and device
Zhu et al.Expanding a fish-eye panoramic image through perspective transformation

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-02-11

