CN116309063A - Correction information generation method, image stitching method and device and image acquisition system - Google Patents

Correction information generation method, image stitching method and device and image acquisition system

Info

Publication number
CN116309063A
CN116309063A
Authority
CN
China
Prior art keywords
image
images
correction information
imaging
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310265652.7A
Other languages
Chinese (zh)
Inventor
曲涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Hualing Opto Electronics Co Ltd
Original Assignee
Weihai Hualing Opto Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Hualing Opto Electronics Co Ltd
Priority to CN202310265652.7A
Publication of CN116309063A
Status: Pending (current)

Abstract

The application provides a correction information generation method, an image stitching method and device, and an image acquisition system, wherein the correction information generation method comprises the following steps: acquiring at least two first images arranged at intervals along a first direction and determining at least two calibration points on each first image, wherein the at least two first images are generated by scanning a first target and any two adjacent first images have an overlapping scanning portion; and determining correction information, wherein the correction information is at least one of the following: the rotation angle of each first image relative to the first direction, the number of overlapping pixels of any two adjacent first images along the first direction, and the number of offset lines of each first image along a second direction perpendicular to the first direction. The method and device can effectively improve the stitching accuracy of the multiple scanned images acquired by an existing array-type image detection device.

Description

Correction information generation method, image stitching method and device and image acquisition system
Technical Field
The application relates to the technical field of image detection and digital image processing, and in particular provides a correction information generation method, a method and a device for image stitching using the correction information generated by the method, and an image acquisition system.
Background
The contact image sensor (Contact Image Sensor, CIS) is a short-distance image acquisition device widely applied in industrial production and civil fields; it can be used to acquire and detect images of surface textures, flaws, and the like of products in industrial production. In general, the resolution of a CIS-acquired image is expressed in DPI (dots per inch), which is determined by the size of each photosensitive element in the photosensitive chip it uses; that is, the light intensity acquired by each photosensitive element corresponds to one pixel, with a specific intensity value, on the scanned image (for gray-scale images, generally expressed as a gray value).
There are various ways to increase the resolution of CIS scanning imaging so as to enable higher-definition image acquisition and detection. For example, photosensitive elements of smaller size may be used so that more pixels can be accommodated per unit length; however, this greatly increases the cost of the CIS, including both material and manufacturing costs.
In addition, the image can be magnified and acquired by means of optical magnifying imaging, without changing the existing CIS structure, so as to achieve the effect of improving resolution. For example, the invention patent application CN202211180946.1, filed by the same applicant, designs magnifying imaging modules arranged alternately at intervals, so that the magnifying imaging modules of the different arrays all have a certain overlapping scanning area on the object plane, thereby realizing full-coverage magnified imaging and acquisition of the image to be detected in the object plane. Obviously, the original images acquired by the individual magnifying imaging modules of such an image acquisition device cannot be directly spliced into a full-width magnified image; corresponding stitching processing must be performed according to the respective object-plane acquisition areas.
However, in the process of stitching the scanned images acquired by this image detection device, it was found that the stitched image obtained by processing the scanned images with a conventional image stitching algorithm still cannot align all portions well; for example, image features that should be continuous are broken, or the same image feature appears repeatedly in different areas of the image.
Therefore, it is necessary to correct the existing image stitching method purposefully, addressing the root causes of the different problems, so as to obtain a better stitching result for the original images acquired by the device.
Disclosure of Invention
The present application aims to solve the problems in the conventional image stitching method, and provides a correction information generating method, a method and a device for image stitching by using correction information generated by the method, and an image acquisition system.
A first aspect of the present application provides a correction information generation method, including the steps of:
acquiring at least two first images which are arranged at intervals along a first direction, and determining at least two calibration points on each first image, wherein the at least two first images are generated by scanning a first target, and any two adjacent first images have an overlapping scanning portion;
Determining correction information, wherein the correction information is at least one of the following information: the rotation angle of each first image relative to the first direction, the number of overlapping pixels of any two adjacent first images along the first direction, and the number of offset rows of each first image along a second direction perpendicular to the first direction.
Preferably, the rotation angle of each first image relative to the first direction is determined from the ratio of the distance in the second direction on the first image of two calibration points on the first image to the distance in the first direction.
Preferably, the number of overlapping pixels of any two adjacent first images along the first direction is determined by: determining real coordinate values of two ends of each first image along the first direction, corresponding to the first target along the first direction, according to at least two calibration points on each first image; determining the corresponding real overlapping length of the overlapping scanning part between each first image and the adjacent first image on the first target based on the real coordinate values; the true overlap length is converted into the number of overlapping pixels of each first image and the adjacent first images along the first direction.
Preferably, the number of offset lines of each first image in the second direction is determined by: calculating the mean value of imaging pixel coordinate values of at least two calibration points on each first image along the second direction; determining an offset row number maximum based on the average value; and determining the offset line number of each first image along the second direction based on the average value and the maximum value of the offset line numbers.
Preferably, before determining the number of overlapping pixels of any two adjacent first images in the first direction and before determining the number of offset lines of each first image in the second direction, rotation correction is performed on each first image according to the rotation angle of each first image with respect to the first direction.
Preferably, the correction information generating method further includes the steps of:
recording the positions of the various calibration points of the first target; alternatively, correcting the position of the first target before the first target is scanned.
A second aspect of the present application provides an image stitching method, including the steps of:
acquiring correction information, wherein the correction information is generated by using the correction information generation method;
acquiring at least two second images which are arranged at intervals along the first direction, wherein the at least two second images are generated by scanning a second target, and the mode of scanning the second target is the same as that of scanning a first target;
Correcting each second image using the correction information;
and splicing the corrected second images.
Preferably, the correcting each second image is at least one of: performing rotation correction on the corresponding second image according to the rotation angle of each first image relative to the first direction; cutting and correcting the two adjacent second images corresponding to any two adjacent first images according to the number of overlapping pixel points of the two adjacent first images along the first direction; and performing displacement correction on the corresponding second image according to the offset line number of each first image along the second direction.
A third aspect of the present application provides an image stitching device, comprising:
a correction information acquisition module for acquiring correction information, wherein the correction information is generated by using the correction information generation method;
the second image acquisition module is used for acquiring at least two second images which are arranged at intervals along the first direction, the at least two second images are generated by scanning a second target, and the mode of scanning the second target is the same as the mode of scanning the first target;
a correction module for correcting each second image using the correction information;
And the splicing module is used for splicing the corrected second images.
A fourth aspect of the present application provides an image acquisition system, including an imaging unit and an image processing unit;
the imaging unit comprises two imaging arrays fixedly arranged at intervals along a second direction, each imaging array comprises at least one imaging module and is used for imaging an image on an object plane on the image plane and collecting the image as an electric signal, and any one imaging module included in each imaging array and at least one imaging module included in the other imaging array are provided with overlapped image collecting areas along the first direction;
the image processing unit comprises a data conversion module, a preprocessing module, a correction information generation module and an image splicing module;
the data conversion module converts the electric signals acquired by each imaging module from analog signals to digital signals;
the preprocessing module constructs a scanning image acquired by each imaging module based on the digital signals;
the correction information generating module determines the correction information corresponding to each imaging module by using the correction information generating method;
the image stitching module corrects and stitches the scanning images acquired by each imaging module by using the image stitching method, and generates a complete scanning image of the image positioned on the object plane.
According to the technical scheme of the present application, by comparing the differences between the actual positions of the calibration points contained in the first target and their positions on the scanned images, accurate correction information is obtained for the angular deviation, the increase or decrease of the overlap amount, and the positional deviation that are generated, beyond the ideal correction amounts, by assembly tolerances, specification differences, and the like of the imaging device; the existing image stitching method is improved on this basis, so that the stitching accuracy of the multiple scanned images acquired with an array-type image detection device is effectively improved.
Drawings
Fig. 1 is a schematic structural diagram of a conventional CIS module;
FIG. 2 is a side cross-sectional view of a conventional array image detection device along the sub-scanning direction;
FIG. 3 is a side cross-sectional view of the imaging module arrays of the array image detection device of FIG. 2 along the scanning direction;
FIG. 4 is a schematic diagram showing the corresponding distribution of the image acquisition areas on the object plane and the image imaging areas on the image plane of the array image detection device shown in FIG. 2;
FIG. 5a shows the original scanned images of a ruler acquired by the array image detection device of FIG. 2;
FIG. 5b is the result of processing the original scanned images of the ruler shown in FIG. 5a with conventional image stitching;
FIG. 6 is an enlarged view of a portion of FIG. 5b where a stitching misalignment exists;
FIG. 7 is a flowchart of an implementation of a correction information generation method according to some embodiments of the present application;
FIG. 8 is a schematic illustration of determining a calibration point on a plurality of first images according to some embodiments of the present application;
FIG. 9 is a flowchart of an implementation of an image stitching method according to some embodiments of the present application;
FIG. 10 is the result of stitching a plurality of second images according to the image stitching method of some embodiments of the present application;
FIG. 11 is a schematic frame structure of an image stitching device according to some embodiments of the present application;
fig. 12 is a side cross-sectional view of an image acquisition system along the sub-scanning direction according to some embodiments of the present application.
Detailed Description
The present application will be further described below based on preferred embodiments with reference to the accompanying drawings.
In addition, various components in the drawings are enlarged or reduced for ease of understanding, but this is not intended to limit the scope of the present application.
In the description of the embodiments of the present application, it should be noted that terms such as "upper," "lower," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings, or those in which a product of the embodiments is conventionally placed in use; they are used merely for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application. Furthermore, the terms "first," "second," etc. are used herein to distinguish between different elements, not necessarily to describe a sequential or chronological order of manufacture, and may not be construed to indicate or imply relative importance; their names may differ between the detailed description of the present application and the claims.
The terminology used in this description is for the purpose of describing the embodiments of the present application and is not intended to limit the present application. It should also be noted that, unless explicitly stated or limited otherwise, the terms "disposed," "connected," and "coupled" should be construed broadly: a connection may, for example, be fixed, detachable, or integral; it may be mechanical; and it may be direct, indirect through an intermediate medium, or an internal communication between two components. The specific meanings of these terms in this application will be understood by those skilled in the art according to the particular circumstances.
In order to explain the technical details of the scheme of the present application more clearly, the problems arising in the stitching of images acquired by the prior-art image acquisition device, and the reasons for those problems, are first explained.
Fig. 1 shows a schematic diagram of a conventional CIS module 80. As shown in fig. 1, the CIS module 80 includes a substrate 82 and an imaging chip 81 disposed on the surface of the substrate 82 facing the image to be detected; further, the imaging chip 81 includes a plurality of photosensitive elements 811 arranged at intervals along an X axis, which is perpendicular to the Y axis and the Z axis, respectively (in an actual image scanning process, the X-axis direction is generally referred to as the scanning direction, and the Y-axis direction as the sub-scanning direction).
The process of image acquisition on the surface of an object by the CIS module 80 is generally referred to as scanning the object: light emitted from the object surface enters each photosensitive element 811, the photosensitive elements 811 convert the received optical signals of different intensities into electrical signals of different intensities, and the electrical signals then undergo analog-to-digital conversion and other processing to generate corresponding gray values. Obviously, the gray value collected and converted by each photosensitive element 811 corresponds to one pixel after imaging, and the number of photosensitive elements 811 per unit length is the resolution (generally in DPI) of the CIS module 80; thus each scan by the CIS module 80 yields a gray-value sequence of one row of pixels.
Further, the object and/or the CIS module 80 are moved relative to each other along the sub-scanning direction while the CIS module 80 is continuously controlled to scan at a certain time interval; by constructing a two-dimensional matrix from the multi-line scanning results, a scanned image of the object surface represented in gray scale can be obtained.
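As a minimal sketch of this line-by-line acquisition model (the function name and the stub line source below are illustrative and not part of the present application), the two-dimensional matrix can be assembled as follows:

```python
import numpy as np

def acquire_scan_image(read_line, num_lines):
    """Stack successive one-line scans into a 2D gray-scale image.

    read_line() stands in for the CIS hardware and is assumed to return one
    row of gray values (one value per photosensitive element).
    """
    rows = [np.asarray(read_line(), dtype=np.uint8) for _ in range(num_lines)]
    return np.vstack(rows)  # shape: (num_lines, pixels_per_line)

# Stub line source in place of a real 1296-element imaging chip:
rng = np.random.default_rng(0)
image = acquire_scan_image(lambda: rng.integers(0, 256, 1296), num_lines=100)
print(image.shape)  # (100, 1296)
```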
The resolution of the CIS module 80 determines the definition of the scanned image. For example, a common CIS module 80 has a resolution of 1200 DPI, so its pixel size is 25.4 mm/1200 ≈ 21 µm; that is, the module can clearly distinguish image features of the object surface larger than 21 µm.
When the required resolution of scanning imaging exceeds the resolution of the CIS module 80 itself, a CIS module of higher resolution can be substituted; however, in the industrial and civil fields, using a higher-resolution CIS module means a significant increase in manufacturing and use costs. For this reason, the applicant proposed in patent application CN202211180946.1 a device that realizes higher-resolution image scanning, while keeping the resolution of the existing CIS module 80, by combining magnifying lenses and arranging the modules in alternating arrays. Fig. 2 and 3 are side sectional views of this array image detection device along the sub-scanning direction (the Y-axis direction in the figures) and the scanning direction (the X-axis direction in the figures), respectively, and fig. 4 is a schematic diagram showing the corresponding distribution of the image acquisition areas of the array image detection device on the object plane and the image imaging areas on the image plane. In the embodiments of the present application, the X-axis direction is taken as the first direction and the Y-axis direction as the second direction. Note that the X-axis and Y-axis directions in the drawings of the present application differ from those in the drawings of the invention patent application CN202211180946.1; swapping the X-axis and Y-axis labels merely conforms to the general convention for representing scanned-image coordinates in the scheme of the present application and does not substantially change the technical scheme.
As shown in fig. 2 to 4, the array-type image detection device includes a first imaging array 1 and a second imaging array 2 arranged at a predetermined interval in the Y-axis direction. The first imaging array 1 includes a plurality of first imaging modules arranged at intervals along the X-axis direction. Each first imaging module in the first imaging array 1 includes a first diaphragm 13, a first magnifying lens 11, and a first imaging chip 12 arranged in order along its optical axis (denoted by a first optical axis 14). Because of the magnifying effect of the first magnifying lens 11, each first imaging module has a first image acquisition area 16 and a corresponding magnified first image imaging area 15 on the two sides of the first magnifying lens 11; an image to be detected located in the first image acquisition area 16 is magnified and imaged in the corresponding first image imaging area 15 and acquired by the first imaging chip 12 located in the first image imaging area 15.
Similarly, the second imaging array 2 includes a plurality of second imaging modules arranged at intervals along the X-axis direction, each of which includes a second diaphragm 23, a second magnifying lens 21, and a second imaging chip 22 arranged in order along its optical axis (denoted by a second optical axis 24). Accordingly, a second image acquisition area 26 and a corresponding second image imaging area 25 for magnified imaging are formed on the two sides of the second magnifying lens 21; the image to be detected in the second image acquisition area 26 is magnified and imaged in the corresponding second image imaging area 25 and acquired by the second imaging chip 22 located in the second image imaging area 25.
The first imaging chip 12 and the second imaging chip 22 used in the array image detection device may adopt the CIS module 80 shown in fig. 1. For example, when the magnification of the magnifying lens is 1.67 times, magnified imaging of detail features of about 12 µm of the image on the object plane can be realized, i.e. a resolution of 2000 DPI can be reached, so that higher-resolution scanning imaging is realized without changing the actual resolution of the CIS module 80.
In addition, in some preferred embodiments, the array-type image detection device further includes a hollow first outer frame 31 and second outer frame 32, a heat dissipation plate 33, a partition 34, and a light source module 4.
Further, the array image detection device includes a data conversion module 51 and a data processing module 602; the data conversion module 51 performs AD conversion on the analog electrical signals output by the imaging chips, and the data processing module 602 has an image processing function and processes the digital signals received from the data conversion module to synthesize a detection image. Specific embodiments of the above parts are described in detail in the description of CN202211180946.1.
After each imaging module in the array image detection device performs magnified scanning of its image acquisition area 16 or 26 and forms an image in the corresponding image imaging area 15 or 25, the multiple scanned images need to be spliced by the data processing module 602. As shown in fig. 4, the data processing module 602 needs to cut the multiple scanned images along the X-axis direction and translate them along the Y-axis direction, so as to realize de-duplication along the X axis and alignment along the Y axis.
Obviously, once the specifications and positional relationships of the components of the array image detection device are determined, the number of pixels to cut along the X-axis direction and the number of translation lines along the Y-axis direction are also determined, wherein: the number of cut pixels is determined by the product of the overlap length Δx between adjacent first and second image acquisition areas 16, 26 along the X-axis direction and the magnification of the imaging module, and the number of translation lines is determined by the product of the distance between the first and second image acquisition areas 16, 26 along the Y-axis direction and the magnification.
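Under these design values the two correction amounts reduce to simple products. The sketch below uses the 0.5 mm overlap and 1.67x magnification of the embodiment together with a 1200 DPI chip; the 4 mm Y-axis distance between acquisition areas is an assumed value, and the scan line pitch is taken equal to the pixel pitch:

```python
# Ideal correction amounts derived from the design specification.
CHIP_DPI = 1200
MAGNIFICATION = 1.67
PX_PER_MM = CHIP_DPI / 25.4            # image-plane pixels per millimetre

overlap_object_mm = 0.5                # Δx of adjacent acquisition areas
array_gap_object_mm = 4.0              # assumed Y-axis distance between areas

cut_pixels = round(overlap_object_mm * MAGNIFICATION * PX_PER_MM)
translation_lines = round(array_gap_object_mm * MAGNIFICATION * PX_PER_MM)
print(cut_pixels, translation_lines)   # 39 316
```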
In actual use, however, the applicant found that even when the scanned images acquired by each imaging module are processed according to the above rule, the images still cannot be aligned after splicing. Fig. 5a shows, in a specific embodiment, the 6 original scanned images output after the array image detection device scans a precision-checked ruler using 6 imaging modules as shown in fig. 4 (the 6 original scanned images are displayed end to end in sequence in the figure), and fig. 5b is the image generated after the data processing module 602 performs stitching by the conventional image stitching method based on the design specification of the array image detection device. With an object distance of 25 mm, the theoretical length of the image acquisition area is 11 mm according to the specification of the imaging chip. Meanwhile, according to the design and assembly specifications, the overlapping portion of adjacent scanned images is Δx = 0.5 mm; that is, ideally each image should, after final processing, correspond to a 10.5 mm length range of the image acquisition area along the X-axis direction, with no overlap or offset between the images. However, owing to inconsistencies such as the mounting, processing precision, and tilt of the lenses, imaging chips, and other components, the images obtained by the above stitching algorithm still exhibit vertical positional deviation, increased or decreased overlap, and similar phenomena.
FIG. 6 shows, in a partial enlargement, the local deviation phenomena of fig. 5b: for example, two adjacent scanned images, even after translation along the Y axis, still deviate from each other along the Y axis (denoted B1 in the figure); for another example, although image cutting has been performed, the same image features still appear in two adjacent images (the same feature of the object surface appearing in the two adjacent scanned images is denoted A1 and A2 in the figure). These misaligned portions do not accurately reflect the surface image features of the object to be detected and thus seriously affect the accuracy of the image detection result, so the existing image stitching method must be improved to eliminate the problems caused by the uncertainties of the processing and assembly processes.
Therefore, the embodiments of the present application provide a correction information generation method that remedies the defects of the image stitching algorithm of the existing array-type image acquisition equipment: it accurately determines the degree to which uncertain factors in the manufacture and installation of the product affect the scanned images, and provides more accurate correction information for the stitching of the multiple scanned images.
Fig. 7 shows a flow chart of an implementation of the correction information generation method; in some embodiments, as shown in the figure, the method comprises the following steps:
acquiring at least two first images which are arranged at intervals along a first direction, and determining at least two calibration points on each first image, wherein the at least two first images are generated by scanning a first target, and any two adjacent first images have an overlapping scanning portion;
determining correction information, wherein the correction information is at least one of the following information: the rotation angle of each first image relative to the first direction, the number of overlapping pixels of any two adjacent first images along the first direction, and the number of offset rows of each first image along a second direction perpendicular to the first direction.
The specific implementation procedure of the correction information generation method will be described in detail with reference to the preferred embodiments.
The first step of the correction information generation method provided by the present application is to acquire at least two first images generated by scanning a first target and to determine calibration points on the first images. Here, the first target refers to an object whose surface has a plurality of calibration points whose accurate coordinate information can be acquired, for example: a ruler with a plurality of interval marks verified as accurate by detection, or a correction pattern printed by a printer with a plurality of calibration points whose coordinate values relative to a preset origin have been accurately measured, and so on. The first target may be scanned using an array-type image detection device as shown in fig. 2 and 3; the specific scanning manner is described in detail above.
After the first target is scanned, at least two first images arranged at intervals along the first direction are obtained (in the embodiments of the present application, the scanning direction, i.e. the X-axis direction, is the first direction). The embodiment shown in fig. 5a is again taken as an example: the precision-checked ruler in fig. 5a is the first target, and in the embodiments of the present application the original scanned images obtained by scanning the first target are called first images; obviously, fig. 5a contains 6 first images in total.
Fig. 8 shows a schematic diagram for determining calibration points on the plurality of first images shown in fig. 5a. As can be seen from fig. 8, each of the 6 first images includes a portion in which the first target is scanned repeatedly along the X-axis direction, and each first image may have a different angular rotation relative to the first target as well as a mutual shift along the Y-axis direction (in the embodiments of the present application, the sub-scanning direction, i.e. the Y-axis direction, is the second direction). In the embodiments of the present application, the information for correcting the various stitching misalignments is determined by comparing the difference between the true coordinate value of each calibration point on the first target and its imaging pixel coordinate value on each first image.
Taking the first image 101 and the first image 102 (the first and second images from the left in fig. 8) as an example, the imaging pixels of two calibration points D1 and D2 separated by a certain distance can be found on the first image 101; they correspond to two calibration points on the first target, and the imaging pixels of D1 and D2 each have corresponding imaging pixel coordinate values on the first image 101. For example, the true coordinate values of D1, D2 on the first target may be recorded as (XD1, YD1), (XD2, YD2), and their imaging pixel coordinate values on the first image as (X01, Y01), (X02, Y02). Obviously, for a two-dimensional discretized digital image, the coordinate values of each imaging pixel can conveniently be expressed by the pixel dot count along the X-axis direction and the scan line count along the Y-axis direction.
Similar to the calibration points D1, D2, two calibration points D3, D4 separated by a certain distance can also be found on the first image 102. The same steps may be used to determine the calibration points on the remaining first images, which are not described in detail herein.
The two calibration points on the same first image should be selected as close to its two ends as possible. For example, in the embodiment shown in fig. 8, the theoretical length of the image acquisition area of each imaging chip on the object plane is 11 mm (along the X-axis direction); considering that the image acquisition areas of adjacent imaging modules have an overlapping scanning portion of 0.5 mm, two calibration points about 10.5 mm apart along the X-axis direction can be selected on the first target, which ensures that both points are imaged on the same first image.
Obviously, before or after the first target is scanned, the positions of the various calibration points of the first target should be recorded so as to acquire their true coordinate values. In addition, in some preferred embodiments, the position of the first target may be corrected before it is scanned; this correction ensures that the line connecting the calibration points of the first target is parallel to the X-axis direction, i.e. that their true Y-axis coordinate values are the same, which facilitates the subsequent determination of the rotation angle of each first image.
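For illustration, the recorded calibration data can be held in a small structure pairing each point's true coordinates on the target with its imaged pixel coordinates; the structure and the numeric values below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class CalibrationPoint:
    """One calibration point of the first target.

    true_x, true_y: measured coordinates on the target (e.g. mm from a fixed
    origin; after position correction all true_y values coincide).
    px, py: imaging pixel coordinates (pixel column along X, scan line along Y).
    """
    true_x: float
    true_y: float
    px: int
    py: int

# Illustrative records for D1 and D2 on the first image 101:
D1 = CalibrationPoint(true_x=0.25, true_y=0.0, px=12, py=37)
D2 = CalibrationPoint(true_x=10.75, true_y=0.0, px=1250, py=31)
```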
After the above steps are completed, the correction information may be determined; in the embodiments of the present application, the correction information may include:
the rotation angle of each first image relative to the first direction, the number of overlapping pixel points of any two adjacent first images along the first direction and the offset line number of each first image along the second direction.
The following describes the steps of determining the correction information in detail with reference to the specific embodiment.
(1) The rotation angle of each first image relative to the first direction.
During assembly of the array-type image detection device, it is difficult to adjust the imaging chip exactly horizontal along the first direction (i.e. the X-axis direction) when it is attached, so the tilt angles of the imaging modules differ. The rotation angle of each first image relative to the first direction thus characterizes the tilt of the corresponding imaging module caused by assembly tolerances and the like; by acquiring the rotation angle of each first image and correcting it before image stitching, the influence of the tilted assembly angles of the imaging modules can be eliminated.
In an embodiment of the present application, the rotation angle of each first image relative to the first direction is determined from the ratio of the distance between two calibration points along the second direction on the first image to their distance along the first direction. Taking the first image 101 in fig. 8 as an example, since the line connecting the calibration points D1, D2 on the first target is parallel to the X-axis direction, the rotation angle β1 of the first image 101 relative to the X-axis direction can be determined by the formula β1 = arctan[(Y02 − Y01)/(X02 − X01)]. Further, the same steps may be used to determine the rotation angle β2 of the first image 102 relative to the first direction, and the rotation angles of the other first images relative to the X-axis direction.
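A minimal sketch of this angle computation follows; math.atan2 is the numerically robust form of the arctan ratio above, and the pixel coordinates are illustrative:

```python
import math

def rotation_angle(p1, p2):
    """Rotation angle beta of a first image relative to the X axis, from the
    imaged pixel coordinates of two calibration points whose connecting line
    on the first target is parallel to the X axis.
    """
    (x1, y1), (x2, y2) = p1, p2
    return math.atan2(y2 - y1, x2 - x1)  # radians

# Illustrative imaging pixel coordinates of D1 and D2 on the first image 101:
beta1 = rotation_angle((12, 37), (1250, 31))
print(math.degrees(beta1))               # about -0.28 degrees of tilt
```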
In some preferred embodiments, the imaging pixel coordinates of each first image may also be converted before calculating the rotation angle of each first image relative to the first direction; for example, taking the calibration point D1 as the reference, the X-axis coordinate value of every pixel on each first image is kept unchanged while Y01 is subtracted from its Y-axis coordinate value, thereby obtaining converted imaging pixel coordinate values that facilitate the subsequent generation of correction information. After this coordinate conversion, the imaging pixel coordinate values of the calibration points D1, D2, D3, D4 on the first images 101, 102 may be denoted (X11, Y11), (X12, Y12), (X13, Y13), (X14, Y14), respectively.
In some preferred embodiments, after the imaging pixel coordinates of each first image have been converted and the rotation angle of each first image relative to the first direction has been obtained, the rotation angles can further be used to perform rotation correction on the first images, so that every first image is arranged at the ideal angle; this improves the accuracy of the number of overlapping pixels along the first direction and the number of offset lines along the second direction computed subsequently. The imaging pixel coordinate values of the calibration points D1, D2, D3, D4 on the rotation-corrected first images 101, 102 may be denoted (X21, Y21), (X22, Y22), (X23, Y23), (X24, Y24), respectively. The rotated coordinates (X21, Y21) and the pre-rotation coordinates (X11, Y11) satisfy the planar rotation relationship (rotation by −β1 about the reference point):

X21 = X11·cos(β1) + Y11·sin(β1)
Y21 = Y11·cos(β1) − X11·sin(β1)

Furthermore, (X22, Y22), (X23, Y23), (X24, Y24) satisfy the same correspondence with (X12, Y12), (X13, Y13), (X14, Y14).
(2) The number of overlapping pixels of any two adjacent first images along the first direction.
The number of overlapping pixels of any two adjacent first images along the first direction is related to the actual size of the area that the corresponding imaging modules scan in overlap. In some preferred embodiments of the present application, the number of overlapping pixels may be determined by the following steps:
determining real coordinate values of two ends of each first image along the first direction, corresponding to the first target along the first direction, according to the two calibration points on each first image;
Determining the corresponding real overlapping length of the overlapping scanning part between each first image and the adjacent first image on the first target based on the real coordinate values;
the true overlap length is converted into the number of overlapping pixels of each first image and the adjacent first images along the first direction.
Specifically, taking fig. 8 as an example, for the first image 101 the X-axis true coordinate values F1, F2 to which its leftmost and rightmost imaging pixels correspond on the ruler can be determined from the two calibration points, with s = (XD2 − XD1)/(X12 − X11) denoting the true length on the ruler corresponding to one pixel:

F1 = XD1 − X11·s
F2 = F1 + N·s
Wherein N is the number of pixels of the first image along the X-axis direction, and also corresponds to the number of imaging elements included on each imaging chip.
In some preferred embodiments, as described above, the F values may be obtained after rotation correction of each first image, in which case F1, F2 can also be expressed with the rotation-corrected coordinates, s = (XD2 − XD1)/(X22 − X21):

F1 = XD1 − X21·s
F2 = F1 + N·s
In the same way, the X-axis true coordinate values F3, F4 to which the leftmost and rightmost imaging pixels of the first image 102 correspond on the ruler can be obtained.
Further, the ruler coordinates of the leftmost and rightmost positions of each first image are simplified by taking the leftmost side of the first image 101 as the origin reference and are identified by L: the leftmost side of the first image 101 corresponds to the X-axis coordinate value L1 = F1 − F1 = 0 on the ruler and its rightmost side to L2 = F2 − F1, while the leftmost side of the first image 102 corresponds to L3 = F3 − F1 and its rightmost side to L4 = F4 − F1.
In this way the true overlap length on the ruler corresponding to the overlapping area between each first image and its adjacent first image can be determined. For example, the overlap amount A1 of the first image 101 can be set to 0; starting from the first image 102, A2 = L2 − L3, i.e. the rightmost L value of the first image 101 minus the leftmost L value of the first image 102. The A values of the other first images may be determined in the same manner and are not described in detail herein.
Finally, the true overlap length is converted into the number of overlapping pixels of each first image with its adjacent first image along the first direction. For example, for the first image 102, its number of overlapping pixels C2 with the first image 101 along the X-axis direction can be calculated by the following formula:

C2 = A2·N/(F4 − F3)
the number of overlapping pixels of each of the remaining first images and the adjacent first images in the first direction can be determined by the same method, and it is apparent that since A1 =0, thus C2 =0。
(3) The number of offset lines of each first image along the second direction.
In an embodiment of the present application, the number of offset lines of each first image along the second direction may be determined by:
Calculating the mean value of imaging pixel coordinate values of at least two calibration points on each first image along the second direction;
determining an offset row number maximum based on the average value;
and determining the offset line number of each first image along the second direction based on the average value and the maximum value of the offset line numbers.
Specifically, taking fig. 8 as an example, the offset amount of each first image along the second direction is first obtained: for the first image 101, B1 = AVERAGE(Y21, Y22); for the first image 102, B2 = AVERAGE(Y23, Y24). The B values of the other first images may be obtained by the same method.
Then, the maximum value among the B values of the respective first images is taken and denoted MAX.
Finally, the number of offset lines along the second direction is determined from MAX and the B value of each first image, for example: the offset line number of the first image 101 along the second direction is E1 = MAX − B1, and the offset line number of the first image 102 along the second direction is E2 = MAX − B2.
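A minimal sketch of this three-step computation, with illustrative Y values:

```python
def offset_lines(y_coords_per_image):
    """Offset line number E of each first image along the second direction,
    from the (rotation-corrected) Y coordinates of its calibration points.
    """
    b = [sum(ys) / len(ys) for ys in y_coords_per_image]  # per-image mean B
    b_max = max(b)                                        # MAX over all images
    return [round(b_max - bi) for bi in b]                # E = MAX - B

# Illustrative Y values for the first images 101 and 102:
print(offset_lines([(0.0, 0.0), (118.2, 117.8)]))         # -> [118, 0]
```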
Tables 1 and 2 below show the data values of the first image 101 and the first image 102 obtained by the correction information generation method in the embodiment shown in fig. 8, where β, C, and E are the correction information corresponding to each first image.
TABLE 1
(values for the first image 101; reproduced as an image in the original)
TABLE 2
(values for the first image 102; reproduced as an image in the original)
The above is a detailed description of the correction information generation method provided in the embodiments of the present application. The present application further provides, through its embodiments, an image stitching method that uses the correction information to correct scanned images and then performs the stitching operation.
Fig. 9 shows an implementation flow of the image stitching method provided in the present application, and as shown in fig. 9, the method includes the following steps:
acquiring correction information, wherein the correction information is generated by using the correction information generation method;
acquiring at least two second images which are arranged at intervals along the first direction, wherein the at least two second images are generated by scanning a second target, and the mode of scanning the second target is the same as that of scanning a first target;
correcting each second image using the correction information;
and splicing the corrected second images.
The second target is an object on which image acquisition and/or image testing needs to be performed, for example a part whose surface needs to be inspected, or a paper document whose specific content needs to be identified. The array-type image detection device is used to scan the second target and obtain a plurality of second images; since the structure of the array-type image detection device is unchanged, the image acquisition area, image imaging area, and so on of each imaging module remain unchanged, so the manner of scanning the second target is the same as the manner of scanning the first target.
After the second target is scanned, the at least two second images obtained by scanning can be corrected using the correction information generated by the above correction information generation method; the specific correction includes:
performing rotation correction on the corresponding second image according to the rotation angle of each first image relative to the first direction (i.e. each β value generated in the correction information generation method); cutting and correcting the two adjacent second images corresponding to any two adjacent first images according to the number of overlapping pixels of the two adjacent first images along the first direction (i.e. each C value generated in the correction information generation method); and performing displacement correction on the corresponding second image according to the number of offset lines of each first image along the second direction (i.e. each E value generated in the correction information generation method). The manner of correcting and stitching digital images using such correction information is well known to those skilled in the art and is not described in detail herein.
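A simplified sketch of these three corrections followed by splicing is given below, assuming β in radians, gray images as 2D numpy arrays, and scipy for the rotation; a production implementation would treat interpolation, borders, and sign conventions more carefully:

```python
import numpy as np
from scipy.ndimage import rotate

def correct_and_stitch(images, betas, cs, es):
    """Apply the correction information (beta, C, E) to the second images and
    splice them along the first (X) direction. The rotation sign depends on
    the axis convention of the image arrays.
    """
    parts = []
    for img, beta, c, e in zip(images, betas, cs, es):
        img = rotate(img, -np.degrees(beta), reshape=False, order=1)  # undo tilt
        if c > 0:
            img = img[:, c:]                  # drop columns duplicated by overlap
        if e > 0:                             # shift down by E scan lines
            pad = np.zeros((e, img.shape[1]), dtype=img.dtype)
            img = np.vstack([pad, img[: img.shape[0] - e]])
        parts.append(img)
    return np.hstack(parts)                   # splice along the X direction

# Toy example with two flat-gray second images and the values computed above:
imgs = [np.full((200, 1296), 128, dtype=np.uint8) for _ in range(2)]
print(correct_and_stitch(imgs, [-0.0048, 0.0], [0, 90], [118, 0]).shape)
# -> (200, 2502)
```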
Fig. 10 shows the result of correcting and stitching the original scanned images of the ruler in the embodiment shown in fig. 5a by the above image stitching method. When the scanned images of the ruler are corrected and stitched in this way, the ruler serves as the second target, and the first images, such as the first image 101 and the first image 102, serve as the second images for correction and stitching.
Comparing the original scanned images shown in fig. 5a, the stitched image obtained by the conventional stitching method shown in fig. 5b, and the stitched image generated by the correction information generation method and image stitching method of the present application shown in fig. 10, it can be seen that the correction information generation method provided in this embodiment obtains, by comparing the differences between the actual positions of the calibration points contained in the first target and their positions on the scanned images, accurate correction information for the angular deviation, the increase or decrease of the overlap amount, and the positional deviation that arise, beyond the ideal correction amounts, from assembly tolerances, specification differences, and the like of the imaging device; on this basis it improves the existing image stitching method, thereby effectively improving the stitching accuracy of the multiple scanned images acquired with the array image detection device.
The present application further provides an image stitching device according to an embodiment. As shown in fig. 11, in some preferred embodiments, the image stitching device 700 includes a correction information acquisition module, a second image acquisition module, a correction module, and a stitching module.
Specifically, the correction information acquisition module is used to acquire correction information generated by the above correction information generation method; the second image acquisition module is used to acquire at least two second images arranged at intervals along the first direction, the at least two second images being generated by scanning a second target in the same manner as the first target is scanned; the correction module is used to correct each second image using the correction information; and the stitching module is used to splice the corrected second images.
The embodiments of the present application also provide an image acquisition system comprising an imaging unit and an image processing unit. Fig. 12 shows a side sectional view of the image acquisition system along the sub-scanning direction.
As shown in fig. 12, the imaging unit includes two imaging arrays 1, 2 fixedly arranged at an interval along the Y-axis direction. Each imaging array includes at least one imaging module for imaging an image located on the object plane onto the image plane and acquiring it as an electrical signal, and any imaging module of one imaging array has, along the first direction, an image acquisition area that overlaps that of at least one imaging module of the other imaging array. Specific embodiments of the imaging unit are described in detail in the prior application CN202211180946.1.
As shown in fig. 12, the image processing unit includes a data conversion module 51 and a data processing module 602', where the data processing module 602' further includes a preprocessing module, a correction information generation module, and an image stitching module. The data conversion module 51 converts the electrical signal acquired by each imaging module from an analog signal to a digital signal; the preprocessing module constructs the scanned image acquired by each imaging module based on the digital signals; the correction information generation module determines the correction information corresponding to each imaging module using the above correction information generation method; and the image stitching module corrects and stitches the scanned images acquired by each imaging module using the above image stitching method, generating a complete scanned image of the image located on the object plane.
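The division of labour among these modules can be summarized in a small skeleton; the class and method names are illustrative, and the stitching step expects a routine like the correct_and_stitch sketch above:

```python
import numpy as np

class ImageProcessingUnit:
    """Skeleton of the processing chain: data conversion -> preprocessing ->
    corrected stitching. Names and stub logic are illustrative only.
    """
    def __init__(self, stitcher, correction_info):
        self.stitcher = stitcher                # image stitching module
        self.correction_info = correction_info  # (betas, Cs, Es) per module

    def convert(self, analog_lines):
        # Data conversion module: normalized analog samples -> 8-bit gray values.
        return [np.clip(np.round(np.asarray(a) * 255), 0, 255).astype(np.uint8)
                for a in analog_lines]

    def preprocess(self, lines_per_module):
        # Preprocessing module: stack each module's scan lines into a 2D image.
        return [np.vstack(lines) for lines in lines_per_module]

    def process(self, analog_lines_per_module):
        digital = [self.convert(lines) for lines in analog_lines_per_module]
        images = self.preprocess(digital)
        betas, cs, es = self.correction_info
        return self.stitcher(images, betas, cs, es)
```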
While the foregoing is directed to embodiments of the present application, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (10)

1. A correction information generation method, characterized by comprising the following steps:
acquiring at least two first images which are arranged at intervals along a first direction, and determining at least two calibration points on each first image, wherein the at least two first images are generated by scanning a first target, and any two adjacent first images have an overlapping scanning portion;
determining correction information, wherein the correction information is at least one of the following information: the rotation angle of each first image relative to the first direction, the number of overlapping pixels of any two adjacent first images along the first direction, and the number of offset rows of each first image along a second direction perpendicular to the first direction.
2. The correction information generation method according to claim 1, characterized in that:
the rotation angle of each first image relative to the first direction is determined from the ratio of the distance of two calibration points on the first image in the second direction to the distance in the first direction on the first image.
3. The correction information generation method according to claim 1, wherein the number of overlapping pixel points of any two adjacent first images in the first direction is determined by:
determining real coordinate values of two ends of each first image along the first direction, corresponding to the first target along the first direction, according to at least two calibration points on each first image;
determining the corresponding real overlapping length of the overlapping scanning part between each first image and the adjacent first image on the first target based on the real coordinate values;
the true overlap length is converted into the number of overlapping pixels of each first image and the adjacent first images along the first direction.
4. The correction information generating method according to claim 1, wherein the number of offset lines of each first image in the second direction is determined by:
calculating the mean value of imaging pixel coordinate values of at least two calibration points on each first image along the second direction;
determining an offset row number maximum based on the average value;
and determining the offset line number of each first image along the second direction based on the average value and the maximum value of the offset line numbers.
5. The correction information generation method according to claim 1, characterized in that:
before determining the number of overlapping pixels of any two adjacent first images along the first direction and the number of offset lines of each first image along the second direction, rotation correction is performed on each first image according to the rotation angle of each first image relative to the first direction.
6. The correction information generation method according to claim 1, characterized by further comprising the step of:
recording the positions of the various calibration points of the first target; or,
correcting the position of the first target before the first target is scanned.
7. An image stitching method is characterized by comprising the following steps:
acquiring correction information, wherein the correction information is generated using the correction information generation method according to claim 1;
acquiring at least two second images which are arranged at intervals along the first direction, wherein the at least two second images are generated by scanning a second target, and the mode of scanning the second target is the same as that of scanning a first target;
correcting each second image using the correction information;
and splicing the corrected second images.
8. The image stitching method according to claim 7, wherein the modifying each second image is at least one of:
Performing rotation correction on the corresponding second image according to the rotation angle of each first image relative to the first direction;
cutting and correcting the two adjacent second images corresponding to any two adjacent first images according to the number of overlapping pixel points of the two adjacent first images along the first direction; the method comprises the steps of,
and carrying out displacement correction on the corresponding second image according to the offset line number of each first image along the second direction.
9. An image stitching device, comprising:
a correction information acquisition module configured to acquire correction information, wherein the correction information is generated using the correction information generation method according to claim 1;
the second image acquisition module is used for acquiring at least two second images which are arranged at intervals along the first direction, the at least two second images are generated by scanning a second target, and the mode of scanning the second target is the same as the mode of scanning the first target;
a correction module for correcting each second image using the correction information;
and the splicing module is used for splicing the corrected second images.
10. An image acquisition system, characterized by:
comprises an imaging unit and an image processing unit;
The imaging unit comprises two imaging arrays fixedly arranged at an interval along a second direction, each imaging array comprising at least one imaging module for imaging an image located on an object plane onto the image plane and acquiring it as an electrical signal, wherein
any one imaging module included in each imaging array and at least one imaging module included in the other imaging array are provided with overlapped image acquisition areas along a first direction;
the image processing unit comprises a data conversion module, a preprocessing module, a correction information generation module and an image splicing module;
the data conversion module converts the electric signals acquired by each imaging module from analog signals to digital signals;
the preprocessing module constructs a scanning image acquired by each imaging module based on the digital signals;
the correction information generation module determines the correction information corresponding to each imaging module using the correction information generation method of claim 1;
the image stitching module corrects and stitches the scanned images acquired by each imaging module by using the image stitching method of claim 7, and generates a complete scanned image of the image located on the object plane.
CN202310265652.7A · Priority 2023-03-15 · Filed 2023-03-15 · Correction information generation method, image stitching method and device and image acquisition system · Pending · CN116309063A (en)

Priority Applications (1)

Application Number: CN202310265652.7A · Priority Date: 2023-03-15 · Filing Date: 2023-03-15 · Title: Correction information generation method, image stitching method and device and image acquisition system

Applications Claiming Priority (1)

Application Number: CN202310265652.7A · Priority Date: 2023-03-15 · Filing Date: 2023-03-15 · Title: Correction information generation method, image stitching method and device and image acquisition system

Publications (1)

Publication Number: CN116309063A · Publication Date: 2023-06-23

Family

Family ID: 86802793

Family Applications (1)

Application Number: CN202310265652.7A · Title: Correction information generation method, image stitching method and device and image acquisition system · Priority Date: 2023-03-15 · Filing Date: 2023-03-15 · Status: Pending · Publication: CN116309063A (en)

Country Status (1)

Country: CN · Publication: CN116309063A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication Number · Priority Date · Publication Date · Assignee · Title
WO2025055723A1 * · 2023-09-14 · 2025-03-20 · 威海华菱光电股份有限公司 (Weihai Hualing Opto Electronics Co Ltd) · Image stitching method and image scanning system


Similar Documents

Publication · Title
US6288801B1 · Self calibrating scanner with single or multiple detector arrays and single or multiple optical systems
US7692832B2 · Method for correcting scanner non-uniformity
CN110653489B · A fast calibration method for multi-galvo mirrors
CN110681990A · A galvanometer correction system and its correction method
US6556315B1 · Digital image scanner with compensation for misalignment of photosensor array segments
KR101545186B1 · Method of correction of defect location using predetermined wafer image targets
JP2009068995A · Microarray device
CN116309063A · Correction information generation method, image stitching method and device and image acquisition system
US6600568B1 · System and method of measuring image sensor chip shift
US20030098998A1 · Image processing apparatus
US7265881B2 · Method and apparatus for measuring assembly and alignment errors in sensor assemblies
WO2025055723A1 · Image stitching method and image scanning system
US7778788B2 · Scanner based optical inspection system
JPH11351836A · Apparatus and method for detecting three-dimensional shape
JP2000175001A · Image data correction method in image reading device
JPH10311705A · Image input device
CN1603868A · Optical beam diagnostic device and method
US20040120017A1 · Method and apparatus for compensating for assembly and alignment errors in sensor assemblies
JPH05172531A · Distance measurement method
TW595209B · Method for adjusting position point of image-scanning module during optical correction
JPH05210732A · Apparatus and method for determining the geometry of an optical system
JP6544233B2 · Calibration method of image reading apparatus and image reading apparatus
JP2004177129A · Image misalignment measuring device, image misalignment measuring method, image misalignment measuring program, and storage medium storing image misalignment measuring program
Gruber et al. · Novel high precision photogrammetric scanning
CN1154339C · Charge coupling element correction method

Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
