CN112991185A - Automatic conjugation method for Dunhuang relic image - Google Patents

Automatic conjugation method for Dunhuang relic image

Info

Publication number
CN112991185A
CN112991185A (application CN202110440552.4A)
Authority
CN
China
Prior art keywords
dunhuang
image
grid
manuscript
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110440552.4A
Other languages
Chinese (zh)
Other versions
CN112991185B (en)
Inventor
张重生
侯亚新
莫伯峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University
Priority to CN202110440552.4A
Publication of CN112991185A
Application granted
Publication of CN112991185B
Legal status: Active
Anticipated expiration

Abstract


The invention discloses an automatic conjugation method for images of Dunhuang manuscript fragments, comprising the steps of: A: obtaining reference lines; B: obtaining the position coordinate points of the reference lines; C: obtaining the width of a grid unit; D: obtaining the scaling ratio that restores the grid-unit width to its real physical size; E: obtaining the edge-line image; F: obtaining the edge-line skeleton image; G: obtaining the edge-line skeleton annotation image; H: obtaining two-dimensional numerical time-series data; I: converting the position coordinate points of the reference lines, the grid-unit width and the two-dimensional time-series data into the corresponding data restored to real physical size; J: normalizing the data restored to real physical size; K: calculating the time-series matching degree of two Dunhuang manuscript fragment images; L: returning the images with the highest time-series matching degree as candidate images with a higher degree of conjugation. The invention greatly improves the efficiency and accuracy of conjugating Dunhuang manuscript fragment images.


Description

Automatic conjugation method for Dunhuang relic image
Technical Field
The invention relates to an image-splicing method for broken artifacts, and in particular to an automatic conjugation method for Dunhuang manuscript images.
Background
Dunhuang manuscripts are important research material for the history, archaeology, religion, anthropology, sociology, linguistics, cultural history, art history, history of science and ethnic history of China, East Asia and South Asia in the ancient period; they have extremely high cultural-relic value and documentary research value, and images of Dunhuang manuscript fragments are the main material for Dunhuang studies. In the field of Dunhuang studies, the naturally formed edge of the manuscript paper in a fragment image is called the book border, whereas a broken edge is not naturally formed: it is an edge produced where the manuscript paper was damaged. A Dunhuang manuscript carries horizontal and vertical grid lines, the horizontal and vertical alignment lines drawn by the copyist; if the upper and/or lower part of a fragment still has the book border, the fragment retains relatively complete horizontal grid lines.
In existing research on Dunhuang manuscripts, researchers usually conjugate fragments manually using domain expertise, repeatedly analyzing how tightly the broken edges and the textual context of two fragments fit together to judge whether the two fragments belonged to the same sheet before it was damaged. This manual conjugation is inaccurate, inefficient and labor-intensive.
Disclosure of Invention
The invention aims to provide an automatic conjugation method for Dunhuang manuscript images that takes into account both the tightness of fit between the jagged seams of broken edges and the correctness of the grid-unit width formed after the broken edges are joined, greatly improving the efficiency and accuracy of conjugating Dunhuang manuscript fragment images.
The invention adopts the following technical scheme:
an automatic conjugation method for Dunhuang manuscript images comprises the following steps:
a: manually determine reference lines on the upper, lower, left and right sides of the Dunhuang manuscript image, and a middle reference line adjacent to the left reference line, obtaining a Dunhuang manuscript reference image;
b: use a computer to locate, in the Dunhuang manuscript reference image obtained in step A, the position coordinate point U of the upper reference line, D of the lower reference line, L of the left reference line, R of the right reference line and M of the middle reference line;
c: use a computer to calculate the width of the grid unit in the Dunhuang manuscript reference image;
d: select a Dunhuang manuscript image whose real physical size is known, obtain the real physical width of its grid unit, obtain the grid-unit width of the corresponding reference image according to steps A to C, and compute the scaling ratio γ that restores the grid-unit width of the reference image to real physical size; here γ = β², where β is the ratio between the grid-unit width measured in the fragment image and the real physical grid-unit width of the fragment;
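The relationship γ = β² in step d is simple arithmetic; a minimal sketch (the function name and the sample widths are assumptions, not from the patent):

```python
def scale_ratios(grid_width_px: float, grid_width_real: float):
    """Compute beta (pixel-to-physical width ratio) and gamma = beta**2
    for a fragment image whose real grid-unit width is known."""
    beta = grid_width_px / grid_width_real  # image width is beta times real width
    gamma = beta ** 2                       # scaling ratio gamma defined in step d
    return beta, gamma

beta, gamma = scale_ratios(grid_width_px=120.0, grid_width_real=30.0)
# beta = 4.0, gamma = 16.0
```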
e: perform edge detection on each Dunhuang manuscript image to extract its edge lines, obtaining an edge-line image corresponding to each fragment image;
f: use a computer to extract the edge-line skeleton from the edge-line image of each Dunhuang manuscript fragment image, obtaining an edge-line skeleton image for each fragment image; the edge-line skeleton is the centered pixel of each edge line;
g: manually mark the left and right broken-edge portions in the edge-line skeleton image of each fragment image, obtaining an edge-line skeleton annotation image corresponding to each fragment image;
h: time-serialize the left and right broken-edge portions of the edge-line skeleton in the annotation images obtained in step G, obtaining the corresponding two-dimensional numerical time-series data;
i: using the scaling ratio γ and the ratio β obtained in step D, convert the reference-line position coordinate points obtained in step B, L:(l_x, l_y), M:(m_x, m_y), R:(r_x, r_y), U:(u_x, u_y) and D:(d_x, d_y), into the position coordinate points restored to real physical size, L′:(l′_x, l′_y), M′:(m′_x, m′_y), R′:(r′_x, r′_y), U′:(u′_x, u′_y) and D′:(d′_x, d′_y); convert the grid-unit width G_w obtained in step C into the width G′_w restored to real physical size; and convert the two-dimensional time-series data T_l and T_r obtained in step H for the left and right broken-edge portions of the edge-line skeleton into the data T′_l and T′_r restored to real physical size, where T′_l = {(V′_l1, W′_l1), (V′_l2, W′_l2), (V′_l3, W′_l3), …, (V′_li, W′_li)}, T′_r = {(V′_r1, W′_r1), (V′_r2, W′_r2), (V′_r3, W′_r3), …, (V′_ri, W′_ri)}, i is a positive integer, and (V′_li, W′_li) and (V′_ri, W′_ri) are the pixel positions of the i-th pixel of the left and right broken-edge portions restored to real physical size;
j: normalize the V′_li of the two-dimensional time-series data T′_l and the V′_ri of T′_r obtained in step I, together with the reference-line position coordinate points L′:(l′_x, l′_y) and R′:(r′_x, r′_y), obtaining the normalized time-series edge-curve data T″_l and T″_r and the normalized reference-line relative position coordinates L″:(l″_x, l″_y) and R″:(r″_x, r″_y), where T″_l = {(V″_l1, W″_l1), (V″_l2, W″_l2), (V″_l3, W″_l3), …, (V″_li, W″_li)} and T″_r = {(V″_r1, W″_r1), (V″_r2, W″_r2), (V″_r3, W″_r3), …, (V″_ri, W″_ri)};
k: obtain, according to step J, the normalized time-series edge-curve data T″_l and T″_r of the broken-edge portions of the edge-line skeletons of the two Dunhuang manuscript fragment images to be conjugated, then calculate the time-series matching degree s of the two images and put it into a set S;
l: for each Dunhuang manuscript fragment image a, calculate its time-series matching degree with the other images in turn according to the method of step K; finally, sort the images by time-series matching degree from large to small (when matching degrees are equal, the image with the smaller sliding distance takes priority) and return the top H images most similar to image a as candidate images with a higher degree of conjugation to image a.
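The ranking in step l can be sketched as follows (a hedged illustration; the tuple layout, names and sample scores are assumptions):

```python
def rank_candidates(scores, top_h):
    """Step l sketch: scores is a list of (image_id, matching_degree,
    sliding_distance); sort by matching degree descending, breaking ties
    by smaller sliding distance, and return the top_h candidate ids."""
    ordered = sorted(scores, key=lambda t: (-t[1], t[2]))
    return [img for img, _, _ in ordered[:top_h]]

scores = [("b", 0.90, 4), ("c", 0.95, 7), ("d", 0.90, 2)]
top = rank_candidates(scores, 2)
# "d" beats "b" on the tie because its sliding distance is smaller
```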
In the step A: first, for Dunhuang manuscript images with a book border at the top, use a pixel pen to draw a horizontal line of a first color, 1 pixel wide, at the upper horizontal grid line of the image as the upper reference line; for images with a book border at the bottom, draw a horizontal line of the first color, 1 pixel wide, at the lower horizontal grid line as the lower reference line;
then, use a pixel pen to draw a vertical line of a second color, 1 pixel wide, at the unbroken vertical grid line closest to the left broken edge of the image as the left reference line, and draw a vertical line of the second color, 1 pixel wide, at the unbroken vertical grid line closest to the right broken edge as the right reference line;
and finally, judge whether a vertical grid line other than the right reference line lies immediately to the right of the left reference line; if so, use a pixel pen to draw a vertical line of the second color, 1 pixel wide, at that vertical grid line as the middle reference line.
In the step B: the initial coordinates of the reference-line position coordinate points U, D, L, M and R are all (0, 0). Then, using the color features, on the horizontal straight line passing through the vertical midpoint of the Dunhuang manuscript reference image, sequentially extract from left to right the pixel positions of all pixels whose value matches the second color; if two pixel positions are extracted, store them in order as L:(l_x, l_y) and R:(r_x, r_y); if three pixel positions are extracted, store them in order as L:(l_x, l_y), M:(m_x, m_y) and R:(r_x, r_y);
on the vertical straight line passing through the horizontal midpoint of the reference image, sequentially extract from top to bottom the pixel positions of all pixels whose value matches the first color; if two pixel positions are extracted, store them as U:(u_x, u_y) and D:(d_x, d_y); if only one pixel position is extracted, judge whether it lies in the upper part of the reference image: if so, store it as U:(u_x, u_y), otherwise as D:(d_x, d_y).
In the step C: according to the position coordinate point L:(l_x, l_y) of the left reference line, the position coordinate point M:(m_x, m_y) of the middle reference line and the position coordinate point R:(r_x, r_y) of the right reference line obtained in step B, if the middle reference line's position coordinate point M is (0, 0), the grid-unit width is G_w = r_x - l_x; otherwise G_w = m_x - l_x.
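Steps B and C can be sketched as follows, assuming the reference image is an RGB NumPy array and the second color (the vertical reference lines) is pure blue; all names and colors here are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

SECOND_COLOR = (0, 0, 255)  # assumed color of the vertical reference lines

def locate_vertical_reference_lines(img: np.ndarray):
    """Scan the row through the image's vertical midpoint left to right and
    collect the x positions of pixels matching the second color; unfound
    lines keep the sentinel 0, mirroring the patent's (0, 0) initialization."""
    row = img[img.shape[0] // 2]
    xs = [x for x, px in enumerate(row) if tuple(px) == SECOND_COLOR]
    if len(xs) == 2:   # only left and right reference lines present
        return {"L": xs[0], "M": 0, "R": xs[1]}
    if len(xs) == 3:   # left, middle and right reference lines present
        return {"L": xs[0], "M": xs[1], "R": xs[2]}
    return {"L": 0, "M": 0, "R": 0}

def grid_unit_width(pts: dict) -> int:
    """Step C: G_w = r_x - l_x when no middle line exists, else m_x - l_x."""
    return pts["R"] - pts["L"] if pts["M"] == 0 else pts["M"] - pts["L"]

img = np.zeros((10, 50, 3), dtype=np.uint8)
img[:, 5] = (0, 0, 255)   # left reference line
img[:, 45] = (0, 0, 255)  # right reference line
pts = locate_vertical_reference_lines(img)
```

For brevity the sketch keeps only the x coordinates; the patent stores full (x, y) pairs.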
In the step F: for the edge-line image obtained for each Dunhuang manuscript fragment image, enhance the edge-line skeleton in the edge-line image according to a pixel threshold Q and set the non-skeleton pixels as background, obtaining the edge-line skeleton image corresponding to each fragment image; the edge-line skeleton is the centered pixel of the 3-pixel-wide edge line obtained in step E.
In the step H: extract, in top-to-bottom and left-to-right order, the pixel position of each pixel of the left and right broken-edge portions of the edge-line skeleton in the annotation image obtained in step G, then combine the pixel positions in sequence to form the two-dimensional time-series data T_l for the left broken-edge portion and T_r for the right broken-edge portion, where T_l = {(V_l1, W_l1), (V_l2, W_l2), (V_l3, W_l3), …, (V_li, W_li)}, T_r = {(V_r1, W_r1), (V_r2, W_r2), (V_r3, W_r3), …, (V_ri, W_ri)}, i is a positive integer, (V_li, W_li) is the pixel position of the i-th pixel of the left broken-edge portion of the edge-line skeleton, and (V_ri, W_ri) is the pixel position of the i-th pixel of the right broken-edge portion.
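The time-serialization of step H can be sketched as follows, assuming the marked broken-edge portion is supplied as a binary mask (names and the toy mask are assumptions):

```python
import numpy as np

def serialize_edge(mask: np.ndarray):
    """Step H sketch: collect pixel positions of a marked broken-edge portion
    top-to-bottom, then left-to-right, as 2-D time-series data [(V, W), ...],
    where V is the column (x) and W is the row (y) of each skeleton pixel."""
    series = []
    for y in range(mask.shape[0]):    # top to bottom
        xs = np.flatnonzero(mask[y])  # skeleton pixels in this row
        for x in xs:                  # left to right within the row
            series.append((int(x), int(y)))
    return series

mask = np.zeros((3, 4), dtype=bool)
mask[0, 1] = mask[1, 1] = mask[2, 2] = True
```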
In the step I: when converting the reference-line position coordinate points L, M, R, U and D into the points L′, M′, R′, U′ and D′ restored to real physical size, take the abscissa l_x of L:(l_x, l_y) and compute l_x/β to obtain l′_x, and take the ordinate l_y and compute l_y/β to obtain l′_y, finally obtaining the restored position coordinate point L′:(l′_x, l′_y); the position coordinate points of the other reference lines, M′:(m′_x, m′_y), R′:(r′_x, r′_y), U′:(u′_x, u′_y) and D′:(d′_x, d′_y), are obtained in the same way;
when converting the grid-unit width G_w into the width G′_w restored to real physical size, compute G_w/β to obtain G′_w;
when converting the two-dimensional time-series data T_l and T_r into T′_l and T′_r, compute V_li/β to obtain V′_li and W_li/β to obtain W′_li for each element of T_l, and compute V_ri/β to obtain V′_ri and W_ri/β to obtain W′_ri for each element of T_r.
Step J: in the normalization, first compute the minimum value min(V′_li) of the V′_li in T′_l and the minimum value min(V′_ri) of the V′_ri in T′_r; then subtract min(V′_li) from the V′_li of every element of T′_l and from l′_x in the reference-line position coordinate point L′, and subtract min(V′_ri) from the V′_ri of every element of T′_r and from r′_x in the reference-line position coordinate point R′, obtaining the normalized time-series edge-curve data T″_l and T″_r and the reference-line position coordinate points L″:(l″_x, l″_y) and R″:(r″_x, r″_y); here W″_li = W′_li, W″_ri = W′_ri, l″_y = l′_y and r″_y = r′_y.
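The normalization of step J, which shifts each curve so that its smallest abscissa becomes zero, can be sketched like this (names and the sample data are assumptions):

```python
def normalize_curve(curve, ref_x):
    """Step J sketch: subtract the minimum abscissa from every V value of the
    curve and from the reference line's x coordinate; ordinates (W values)
    are unchanged."""
    m = min(v for v, _ in curve)
    return [(v - m, w) for v, w in curve], ref_x - m

curve = [(103.0, 0.0), (101.0, 1.0), (104.0, 2.0)]
norm, ref = normalize_curve(curve, ref_x=100.0)
# norm = [(2.0, 0.0), (0.0, 1.0), (3.0, 2.0)], ref = -1.0
```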
Step K: when calculating the time-series matching degree of two Dunhuang manuscript fragment images, first judge whether the upper and/or lower parts of the images a and b have a book border:
if image a has a book border on its upper and/or lower part and image b also has a book border on its upper and/or lower part, place images a and b on the left and right sides of the virtual raster image respectively, so that the horizontal grid lines present in images a and b are aligned with the corresponding horizontal grid lines in the virtual raster image: the upper and lower horizontal grid lines of image a are aligned with the upper and lower horizontal grid lines of the virtual raster image, the upper and lower horizontal grid lines of image b are aligned likewise, and the left and right reference lines of images a and b are aligned with vertical grid lines of the virtual raster image. Keeping images a and b fixed in the virtual raster image, calculate the time-series matching degree s between the vertically overlapping partial curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb corresponding to images a and b, the maximum distance d_a between the right broken-edge portion of image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge portion of image b, restored to real physical size, and its left reference line; combining these with the grid-unit width G′_w restored to real physical size obtained in step I, if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise put s directly into the set S. Finally, take the maximum value in the set S as the maximum conjugation degree between images a and b;
if image a has a book border on its upper and/or lower part and image b has no book border, place images a and b on the left and right sides of the virtual raster image respectively, align the horizontal grid lines on the upper and/or lower part of image a with the corresponding horizontal grid lines in the virtual raster image, and align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image. Keep image a fixed in the virtual raster image and align the broken-edge portions of images a and b first at the top and then at the bottom; after the top and bottom alignments, slide image b upward in the virtual raster image along the vertical direction within the set sliding range with a stride of M pixels, return it to the initial position, and finally slide it downward within the set sliding range. At the initial position and after each slide of image b, calculate the time-series matching degree s between the vertically overlapping partial curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb corresponding to images a and b, the maximum distance d_a between the right broken-edge portion of image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge portion of image b, restored to real physical size, and its left reference line; combining these with the grid-unit width G′_w restored to real physical size obtained in step I, if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise put s directly into the set S. Finally, take the maximum value in the set S as the maximum conjugation degree between images a and b;
if image a has no book border and image b has a book border on its upper and/or lower part, place images a and b on the left and right sides of the virtual raster image respectively, align the horizontal grid lines on the upper and/or lower part of image b with the corresponding horizontal grid lines in the virtual raster image, and align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image. Keep image b fixed in the virtual raster image and align the broken-edge portions of images a and b first at the top and then at the bottom; after the top and bottom alignments, slide image a upward in the virtual raster image along the vertical direction within the set sliding range with a stride of M pixels, return it to the initial position, and finally slide it downward within the set sliding range. At the initial position and after each slide of image a, calculate the time-series matching degree s between the vertically overlapping partial curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb corresponding to images a and b, the maximum distance d_a between the right broken-edge portion of image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge portion of image b, restored to real physical size, and its left reference line; combining these with the grid-unit width G′_w restored to real physical size obtained in step I, if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise put s directly into the set S. Finally, take the maximum value in the set S as the maximum conjugation degree between images a and b;
if neither image a nor image b has a book border, place images a and b on the left and right sides of the virtual raster image respectively and align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image; align the broken-edge portions of images a and b first at the top and then at the bottom. After the top and bottom alignments, keep image a fixed in the virtual raster image, first slide image b upward in the virtual raster image along the vertical direction within the set sliding range with a stride of M pixels, then return it to the initial position, and finally slide it downward within the set sliding range. At the initial position and after each slide of image b, calculate the time-series matching degree s between the vertically overlapping partial curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb corresponding to images a and b, the maximum distance d_a between the right broken-edge portion of image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge portion of image b, restored to real physical size, and its left reference line; combining these with the grid-unit width G′_w restored to real physical size obtained in step I, if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise put s directly into the set S. Finally, take the maximum value in the set S as the maximum conjugation degree between images a and b.
the virtual raster image is a blank image designed manually, the horizontal length of the virtual raster image is not less than the sum of the horizontal lengths of two Dunhuang relic images a and b to be judged whether to be conjugated or not, the vertical height is not less than the greater of the vertical heights of the two dunghuang relic images a and b to be judged whether or not to be able to be conjugated, and the virtual grid image is internally and uniformly provided with grids, the width of each grid is equal to the width of a grid unit after the Dunhuang relic image is restored to the real physical size, the height of each grid is equal to the distance between the transverse grid lines of the upper part and the lower part of the book after the complete document image is restored to the real physical size, M is 1, mod () is a remainder function, N is a grid width threshold value, N is a penalty coefficient, and the sliding range is that after the broken edge parts of the Dunhuang relic image a and b are aligned up and down, the broken edge parts are aligned from the upper end P pixel to the lower end P pixel of the alignment point.
The step K comprises the following specific steps:
k0: judge whether the upper and/or lower parts of the two images a and b to be tested for conjugation have book borders. If neither image a nor image b has a book border, go to step K1; if only image a has a book border on its upper and/or lower part, go to step K2; if only image b has a book border on its upper and/or lower part, go to step K3; if both images a and b have book borders on their upper and/or lower parts, go to step K4;
k1: create a virtual raster image, place images a and b on its left and right sides respectively, align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image, and align the upper end point of the broken-edge portion of image b with the upper end point of the broken-edge portion of image a; then go to step K5;
k2: create a virtual raster image, place images a and b on its left and right sides respectively, align the horizontal grid lines on the upper and/or lower part of image a with the corresponding horizontal grid lines in the virtual raster image, align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image, and align the upper end point of the broken-edge portion of image b with the upper end point of the broken-edge portion of image a; then go to step K5;
k3: create a virtual raster image, place images a and b on its left and right sides respectively, align the horizontal grid lines on the upper and/or lower part of image b with the corresponding horizontal grid lines in the virtual raster image, align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image, and align the upper end point of the broken-edge portion of image a with the upper end point of the broken-edge portion of image b; then go to step K5;
k4: create a virtual raster image, place images a and b on its left and right sides respectively, align the horizontal grid lines present in images a and b with the corresponding horizontal grid lines in the virtual raster image, and align the left and right reference lines of images a and b with vertical grid lines of the virtual raster image; then go to step K5;
k5: at the current position of image b in the virtual raster image, determine the vertically overlapping partial curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb corresponding to images a and b, between which the time-series matching degree s is to be calculated; then go to step K6;
k6: the normalized time-series edge curve data T ″raAnd T ″)lbThe respective head and tail end points are directly connected to form the line length LaAnd Lb,max(La,Lb) The greater of the two; then T ″', andlbsthe head and tail end points of the line are directly connected, so that the length of the formed line is Lc(ii) a If L iscIf the length is larger than or equal to the length threshold, the step K7 is carried out; otherwise, go to step K8;
k7: first, a sub-curve T ″, is calculatedrasAnd T ″)lbsTime series matching degree s between: calculating a sub-curve T ″rasAnd T ″)lbsForming a difference array d by the obtained data differences in sequence according to the data differences of the abscissa of each corresponding position, and recording the number of elements with the median of the difference array d being less than or equal to the difference threshold as tc(ii) a Combining the length L of the line segment obtained in the step K6cAnd calculating to obtain the matching degree s of the time series as tc/Lc
Then calculating the maximum distance d between the right side broken edge part of the Dunhuang relic film image a and the right side reference line of the Dunhuang relic film image a after restoring to the real physical sizeaAnd the minimum distance d between the broken edge part at the left side of the Dunhuang relic image b and the left reference line after the Dunhuang relic image b is restored to the real physical sizeb
Finally, restoring each Dunhuang relic film image obtained in combination with the step I to the grid unit width G 'after the real physical size'wIf mod ((d)a+db),G′w) If the time sequence matching degree S is more than N, calculating the time sequence matching degree S by S multiplied by N to obtain S ', and then putting the time sequence matching degree S' into the set S, otherwise, directly putting the time sequence matching degree S into the set S; then entering step K9;
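As a minimal sketch of steps K6 and K7 (not the patented implementation itself), the score and its grid-width penalty can be expressed as follows; the curves are assumed to be lists of (x, y) points, and all function and parameter names are illustrative:

```python
import math

def segment_length(curve):
    """Length of the straight segment joining a curve's head and tail points (step K6)."""
    (x0, y0), (x1, y1) = curve[0], curve[-1]
    return math.hypot(x1 - x0, y1 - y0)

def matching_degree(t_ras, t_lbs, diff_threshold):
    """Step K7: s = t_c / L_c, where t_c counts the per-position abscissa
    differences that do not exceed the difference threshold."""
    d = [abs(a[0] - b[0]) for a, b in zip(t_ras, t_lbs)]
    t_c = sum(1 for v in d if v <= diff_threshold)
    l_c = segment_length(t_lbs)
    return t_c / l_c if l_c else 0.0

def penalized(s, d_a, d_b, grid_w, N, n):
    """Grid-width check: if mod(d_a + d_b, G'_w) exceeds the threshold N,
    scale s by the penalty coefficient n; otherwise keep s unchanged."""
    return s * n if (d_a + d_b) % grid_w > N else s
```

The penalty rewards candidate pairs whose combined broken-edge span is close to a whole number of grid cells, which is what a correct conjugation should produce.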
k8: sub-curve T ″)rasAnd T ″)lbsSetting the time sequence matching degree S between the sets as 0, and putting the value of S into a set S; then entering step K9;
k9: if there is a book border on the upper and/or lower part of fragment image a and a book border on the upper and/or lower part of fragment image b, go to step L; if there is a book border only on the upper and/or lower part of fragment image b, go to step K13; if fragment image b has no book border, go to step K10;
k10: taking 1 pixel as the stride and the upper end point of the broken-edge part of fragment image a as the reference point, slide fragment image b upwards and then downwards in the virtual raster image; the sliding range does not exceed P pixels above and below the upper end point of the broken-edge part of fragment image a; repeat steps K5 to K8 after each movement, until fragment image b has slid to the boundaries of the sliding range in the virtual raster image, i.e. the upper end point of the broken-edge part of image b has coincided in turn with the pixel points at the upper and lower ends of the sliding range; then go to step K11;
k11: keeping fragment images a and b on the left and right sides of the virtual raster image respectively, align the left and right reference lines of fragment images a and b with the vertical grid lines of the virtual raster image, align the lower end point of the broken-edge part of fragment image a with the lower end point of the broken-edge part of fragment image b, and perform steps K5 to K8 in sequence; after step K8 is finished, go directly to step K12;
k12: taking 1 pixel as the stride and the lower end point of the broken-edge part of fragment image a as the reference point, slide fragment image b upwards and then downwards in the virtual raster image; the sliding range does not exceed P pixels above and below the lower end point of the broken-edge part of fragment image a; repeat steps K5 to K8 after each movement, until fragment image b has slid to the boundaries of the sliding range in the virtual raster image, i.e. the lower end point of the broken-edge part of image b has coincided in turn with the pixel points at the upper and lower ends of the sliding range; then go to step L;
k13: taking 1 pixel as the stride and the upper end point of the broken-edge part of fragment image b as the reference point, slide fragment image a upwards and then downwards in the virtual raster image; the sliding range does not exceed P pixels above and below the upper end point of the broken-edge part of fragment image b; repeat steps K5 to K8 after each movement, until fragment image a has slid to the boundaries of the sliding range in the virtual raster image, i.e. the upper end point of the broken-edge part of image a has coincided in turn with the pixel points at the upper and lower ends of the sliding range; then go to step K14;
k14: keeping fragment images a and b on the left and right sides of the virtual raster image, align the left and right reference lines of fragment images a and b with the vertical grid lines of the virtual raster image, align the book borders on the upper and lower parts of fragment image b with the horizontal grid lines on the upper and lower parts of the virtual raster image, align the lower end point of the broken-edge part of fragment image a with the lower end point of the broken-edge part of fragment image b, and perform steps K5 to K8 in sequence; after step K8 is finished, go directly to step K15;
k15: taking 1 pixel as the stride and the lower end point of the broken-edge part of fragment image b as the reference point, slide fragment image a upwards and then downwards in the virtual raster image; the sliding range does not exceed P pixels above and below the lower end point of the broken-edge part of fragment image b; repeat steps K5 to K8 after each movement, until fragment image a has slid to the boundaries of the sliding range in the virtual raster image, i.e. the lower end point of the broken-edge part of image a has coincided in turn with the pixel points at the upper and lower ends of the sliding range; then go to step L.
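The vertical sliding searches of steps K10 to K15 all amount to evaluating the matching degree at every 1-pixel offset within ±P pixels of an alignment point and collecting the scores, of which the maximum is kept. A hedged sketch, with the per-offset scoring function passed in as a callable and all names illustrative:

```python
def sliding_scores(score_at, p):
    """Evaluate score_at(offset) for every vertical offset in [-p, p],
    stepping 1 pixel at a time, mirroring steps K10/K12/K13/K15."""
    return [score_at(off) for off in range(-p, p + 1)]

def best_conjugation_score(score_at, p):
    """The maximum value over the collected scores is taken as the
    degree of conjugation for this alignment point."""
    return max(sliding_scores(score_at, p))
```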
The invention takes into account both the tightness of the fit between broken-edge notches and the accuracy of the grid cell width formed after conjugation, greatly improving the efficiency and accuracy of conjugating Dunhuang manuscript fragment images. By placing the two fragment images a and b whose conjugability is to be judged into the provided virtual raster image, it effectively uses the information of the horizontal and vertical ruled lines of the manuscript in the fragment images, thereby reducing error.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The invention is described in detail below with reference to the following figures and examples:
As shown in FIG. 1, the automatic conjugation method for Dunhuang manuscript fragment images of the invention comprises the following steps:
A: manually determine the reference lines on the upper, lower, left and right sides of each Dunhuang manuscript fragment image, together with the middle reference line adjacent to the left reference line, to obtain a Dunhuang manuscript fragment reference image;
First, for fragment images with a book border at the top, draw with a pixel pen a horizontal line of the first color, 1 pixel wide, at the upper horizontal ruled line of the fragment image, as the upper reference line; for fragment images with a book border at the bottom, draw with a pixel pen a horizontal line of the first color, 1 pixel wide, at the lower horizontal ruled line of the fragment image, as the lower reference line;
Then, draw with a pixel pen a vertical line of the second color, 1 pixel wide, at the uninterrupted vertical ruled line closest to the left broken edge of the fragment image, as the left reference line, and draw with a pixel pen a vertical line of the second color, 1 pixel wide, at the uninterrupted vertical ruled line closest to the right broken edge of the fragment image, as the right reference line;
Finally, judge whether there is a vertical ruled line, other than the right reference line, adjacent to the right side of the left reference line in the fragment image; if so, draw with a pixel pen a vertical line of the second color, 1 pixel wide, at that vertical ruled line, as the middle reference line.
In the invention, the first color is green and the second color is red;
B: using a computer, locate the position coordinate point U of the upper reference line, the position coordinate point D of the lower reference line, the position coordinate point L of the left reference line, the position coordinate point R of the right reference line, and the position coordinate point M of the middle reference line in the Dunhuang manuscript fragment reference image obtained in step A;
In the invention, the initial coordinates of the points U, D, L, M and R are all (0, 0). Then, using the color characteristics, the pixel positions of all pixel data matching the second-color pixel value are extracted in turn, from left to right, along the horizontal straight line passing through the vertical midpoint of the reference image; if two such pixel positions are extracted, they are saved in order as L: (l_x, l_y) and R: (r_x, r_y); if three such pixel positions are extracted, they are saved in order as L: (l_x, l_y), M: (m_x, m_y) and R: (r_x, r_y). In the invention, the pixel value of the second color is (255, 0, 0).
Along the vertical straight line passing through the horizontal midpoint of the reference image, the pixel positions of all pixel data matching the first-color pixel value are extracted in turn from top to bottom; if two such pixel positions are extracted, they are saved as U: (u_x, u_y) and D: (d_x, d_y); if only one such pixel position is extracted, it is judged whether that pixel position lies in the upper part of the reference image: if so, it is saved as U: (u_x, u_y), otherwise as D: (d_x, d_y). In the invention, the pixel value of the first color is (0, 255, 0).
At this point, the position of the upper reference line is the horizontal straight line passing through point U, the position of the lower reference line is the horizontal straight line passing through point D, the position of the left reference line is the vertical straight line passing through point L, the position of the right reference line is the vertical straight line passing through point R, and the position of the middle reference line is the vertical straight line passing through point M.
C: using a computer, calculate the grid cell width in the Dunhuang manuscript fragment reference image;
According to the position coordinate point L: (l_x, l_y) of the left reference line, the position coordinate point M: (m_x, m_y) of the middle reference line and the position coordinate point R: (r_x, r_y) of the right reference line obtained in step B: if the position coordinate point M of the middle reference line is (0, 0), the grid cell width is G_w = r_x − l_x; otherwise, G_w = m_x − l_x;
D: assuming that the grid cells in all Dunhuang manuscript fragment images are of equal width, select a fragment image whose real physical size is known and obtain the real physical width of its grid cell; obtain the grid cell width of the reference image corresponding to this fragment image according to steps A to C; then calculate the scaling ratio γ for restoring the grid cell width value of the reference image corresponding to the fragment image to the real physical size, where γ = β², β being the multiple relation between the grid cell width value of the fragment image and the real physical width value of its grid cell;
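Steps B and C reduce to a color scan along the image's central row. A minimal sketch over an RGB pixel grid given as a list of rows of (R, G, B) tuples; the helper names and the row-major layout are assumptions for illustration:

```python
GREEN = (0, 255, 0)   # first color: upper/lower reference lines
RED = (255, 0, 0)     # second color: left/middle/right reference lines

def locate_vertical_refs(img):
    """Scan the horizontal line through the image's vertical midpoint,
    left to right, collecting red pixels: L, optional M, R (step B).
    A missing middle line keeps its initial coordinates (0, 0)."""
    mid = len(img) // 2
    xs = [x for x, px in enumerate(img[mid]) if px == RED]
    if len(xs) == 2:
        return {'L': (xs[0], mid), 'R': (xs[1], mid), 'M': (0, 0)}
    return {'L': (xs[0], mid), 'M': (xs[1], mid), 'R': (xs[2], mid)}

def grid_cell_width(refs):
    """Step C: G_w = m_x - l_x if a middle line exists, else r_x - l_x."""
    l_x, m_x, r_x = refs['L'][0], refs['M'][0], refs['R'][0]
    return r_x - l_x if refs['M'] == (0, 0) else m_x - l_x
```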
E: perform edge detection on each Dunhuang manuscript fragment image and extract its edge line, obtaining an edge line image corresponding to each fragment image;
In the invention, the Canny edge detection algorithm is used to automatically extract the edge line of each fragment image; this is conventional in the field and is not described further here.
After the first round of edge line extraction, manually check whether the extracted edge line of each fragment image matches the real situation, and select the fragment images whose edge lines do not match as fragment images to be rechecked; then adjust the parameters of the Canny edge detection algorithm, extract the edges of the fragment images to be rechecked again, and manually check whether the extracted edge lines match the real situation; for fragment images whose edge lines cannot be accurately extracted by the Canny algorithm, trace the edge lines manually and save them; finally, edge lines consistent with the real situation are obtained for every fragment image;
Finally, the edge line corresponding to each fragment image is saved separately as an edge line image with a transparent background, an edge line width of 3 pixels, red color, and a four-channel RGBA image format.
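In practice the patent uses the Canny operator (e.g. OpenCV's cv2.Canny with tuned thresholds). As a dependency-free illustration of what "extracting the edge line" means here, the sketch below marks every foreground pixel of a binary fragment mask that touches the background or the image boundary, which yields the fragment's outline; the mask format and function name are assumptions:

```python
def outline(mask):
    """Return the set of (row, col) foreground pixels with at least one
    4-neighbour that is background or outside the image: the edge line."""
    h, w = len(mask), len(mask[0])
    edge = set()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or not mask[nr][nc]:
                    edge.add((r, c))
                    break
    return edge
```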
F: using a computer, obtain the edge line skeleton in the edge line image corresponding to each fragment image, yielding an edge line skeleton image for each fragment image; the edge line skeleton refers to the centered pixel points within the edge line.
In the invention, for the edge line image obtained for each fragment image, the edge line skeleton in the edge line image is enhanced according to a pixel threshold Q, with the non-skeleton pixels treated as background, yielding the edge line skeleton image corresponding to each fragment image. The edge line skeleton refers to the centered pixel points within the 3-pixel-wide edge line obtained in step E;
In this embodiment, the pixel threshold Q is 174; using the color characteristics, pixel points whose pixel value is less than or equal to (174, 0, 0, 255) are automatically set to (0, 0, 0, 0), and all others to (255, 0, 0, 255), finally yielding the edge line skeleton image corresponding to each fragment image; the four values in parentheses are the values of the R, G, B and A channels respectively;
G: manually determine the left and right broken-edge parts in the edge line skeleton image of each fragment image, obtaining an edge line skeleton annotation image corresponding to each fragment image.
Manually inspect the edge line skeleton image of each fragment image, determine the start and end points of the left and right broken-edge parts of the edge line skeleton, draw at each of them a blue color block with a side length of 1 pixel using a pixel pen, and save the result, finally obtaining, for each fragment image, an edge line skeleton annotation image annotated with the start and end points of the broken-edge parts.
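The skeleton enhancement of step F is a per-pixel threshold on RGBA values. A sketch under the assumption (one reading of "less than or equal to (174, 0, 0, 255)") that the comparison is carried by the red channel of the red-on-transparent edge line; names are illustrative:

```python
Q = 174  # pixel threshold from the embodiment

def skeletonize(img):
    """Step F sketch: pixels whose red channel is at or below the threshold
    become transparent background (0, 0, 0, 0); all others become solid
    red (255, 0, 0, 255), leaving only the centered skeleton pixels."""
    return [[(0, 0, 0, 0) if px[0] <= Q else (255, 0, 0, 255) for px in row]
            for row in img]
```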
H: perform time serialization on the left and right broken-edge parts of the edge line skeleton in the edge line skeleton annotation image obtained in step G, obtaining the corresponding two-dimensional numerical time-series data.
Extract the pixel positions of each pixel datum of the left and right broken-edge parts of the edge line skeleton in the annotation image obtained in step G, in order from top to bottom and from left to right; then combine the pixel positions in sequence to form the two-dimensional time-series data T_l corresponding to the left broken-edge part of the edge line skeleton and T_r corresponding to the right broken-edge part, where T_l = {(V_l1, W_l1), (V_l2, W_l2), (V_l3, W_l3), …, (V_li, W_li)}, T_r = {(V_r1, W_r1), (V_r2, W_r2), (V_r3, W_r3), …, (V_ri, W_ri)}, i is a positive integer, (V_li, W_li) is the pixel position of the i-th pixel datum of the left broken-edge part of the edge line skeleton, and (V_ri, W_ri) is the pixel position of the i-th pixel datum of the right broken-edge part of the edge line skeleton.
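Step H's serialization can be sketched as collecting the broken-edge pixels in scan order. Here a broken-edge part is given as a set of (x, y) pixel positions, and the top-to-bottom, left-to-right order from the description is taken to mean sorting by ordinate first, then abscissa, which is an assumption:

```python
def time_serialize(pixels):
    """Order broken-edge pixel positions top-to-bottom, then left-to-right,
    producing the two-dimensional time-series data T = {(V_1, W_1), ...}."""
    return sorted(pixels, key=lambda p: (p[1], p[0]))  # p = (x, y)
```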
I: using the scaling ratio γ and the multiple relation β obtained in step D, convert the reference line position coordinate points L: (l_x, l_y), M: (m_x, m_y), R: (r_x, r_y), U: (u_x, u_y) and D: (d_x, d_y) obtained in step B into the reference line position coordinate points restored to real physical size, L′: (l′_x, l′_y), M′: (m′_x, m′_y), R′: (r′_x, r′_y), U′: (u′_x, u′_y) and D′: (d′_x, d′_y); then convert the grid cell width G_w obtained in step C into the grid cell width G′_w restored to real physical size; and convert the two-dimensional time-series data T_l and T_r corresponding to the left and right broken-edge parts of the edge line skeleton obtained in step H into the two-dimensional time-series data T′_l and T′_r corresponding to the left and right broken-edge parts restored to real physical size; T′_l = {(V′_l1, W′_l1), (V′_l2, W′_l2), (V′_l3, W′_l3), …, (V′_li, W′_li)}, T′_r = {(V′_r1, W′_r1), (V′_r2, W′_r2), (V′_r3, W′_r3), …, (V′_ri, W′_ri)}, i is a positive integer, and (V′_li, W′_li) and (V′_ri, W′_ri) denote the pixel positions of the i-th pixel data of the left and right broken-edge parts restored to real physical size;
Then check the obtained reference line positions restored to real physical size against the reference line positions in the corresponding fragment image of known real physical size, check the obtained grid cell width restored to real physical size against the grid cell width in the corresponding fragment image of known real physical size, and check the obtained pixel positions of the broken-edge part data restored to real physical size against the corresponding broken-edge parts in the fragment image of known real physical size.
When converting the reference line position coordinate points L, M, R, U and D into the points L′, M′, R′, U′ and D′ restored to real physical size: for the abscissa l_x of the reference line point L: (l_x, l_y), perform the operation l_x/β to obtain l′_x, and for the ordinate l_y perform the operation l_y/β to obtain l′_y, finally obtaining the point L′: (l′_x, l′_y) restored to real physical size; the position coordinate points M′: (m′_x, m′_y), R′: (r′_x, r′_y), U′: (u′_x, u′_y) and D′: (d′_x, d′_y) of the other reference lines are obtained in the same way.
At this point, among the reference line positions restored to real physical size, the position of the upper reference line is the horizontal straight line through point U′, the position of the lower reference line is the horizontal straight line through point D′, the position of the left reference line is the vertical straight line through point L′, the position of the right reference line is the vertical straight line through point R′, and the position of the middle reference line is the vertical straight line through point M′.
When converting the grid cell width G_w into the grid cell width G′_w restored to real physical size, perform the operation G_w/β on G_w to obtain G′_w.
When converting the two-dimensional time-series data T_l and T_r into T′_l and T′_r: for V_li in T_l, perform the operation V_li/β to obtain V′_li, and for W_li perform W_li/β to obtain W′_li; for V_ri in T_r, perform V_ri/β to obtain V′_ri, and for W_ri perform W_ri/β to obtain W′_ri.
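Under the reading above that restoring to real physical size divides every pixel coordinate (and the grid width) by the multiple β, step I is a uniform rescale; the division by β is an assumption recovered from the garbled source, and the names are illustrative:

```python
def restore(points, beta):
    """Divide each coordinate of each (x, y) point by beta to restore
    real physical size (step I)."""
    return [(x / beta, y / beta) for x, y in points]

def restore_width(g_w, beta):
    """Restore the grid cell width: G'_w = G_w / beta."""
    return g_w / beta
```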
J: normalize the abscissas V′_li of the two-dimensional time-series data T′_l and V′_ri of T′_r obtained in step I, together with the reference line position coordinate points L′: (l′_x, l′_y) and R′: (r′_x, r′_y), correspondingly obtaining the normalized time-series edge curve data T″_l and T″_r and the normalized reference line relative position coordinates L″: (l″_x, l″_y) and R″: (r″_x, r″_y), where T″_l = {(V″_l1, W″_l1), (V″_l2, W″_l2), (V″_l3, W″_l3), …, (V″_li, W″_li)} and T″_r = {(V″_r1, W″_r1), (V″_r2, W″_r2), (V″_r3, W″_r3), …, (V″_ri, W″_ri)}.
In the invention, in the normalization process, first calculate the minimum value min(V′_li) of the V′_li in T′_l and the minimum value min(V′_ri) of the V′_ri in T′_r; then subtract min(V′_li) from the abscissa V′_li of each datum of T′_l and from l′_x in the reference line point L′, and subtract min(V′_ri) from the abscissa V′_ri of each datum of T′_r and from r′_x in the reference line point R′, respectively obtaining the normalized time-series edge curve data T″_l and T″_r and the reference line position coordinate points L″: (l″_x, l″_y) and R″: (r″_x, r″_y); where W″_li = W′_li, W″_ri = W′_ri, l″_y = l′_y and r″_y = r′_y.
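Step J simply shifts each curve so that its leftmost abscissa becomes zero, applying the same offset to the matching reference line abscissa; ordinates are unchanged. A minimal sketch with illustrative names:

```python
def normalize(curve, ref_x):
    """Step J: subtract the minimum abscissa from every V of the curve and
    from the reference line's abscissa; ordinates W are left unchanged."""
    m = min(v for v, _ in curve)
    return [(v - m, w) for v, w in curve], ref_x - m
```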
K: for the two Dunhuang manuscript fragment images to be conjugated, obtain, according to step J, the normalized time-series edge curve data T″_l and T″_r of the broken-edge parts of their edge line skeletons; then calculate the time-series matching degree s of the two fragment images and put s into the set S;
When calculating the time-series matching degree of two fragment images, first judge whether there are book borders on the upper and/or lower parts of fragment images a and b:
If there is a book border on the upper and/or lower part of fragment image a and a book border on the upper and/or lower part of fragment image b, place fragment images a and b on the left and right sides of the virtual raster image respectively, so that the horizontal ruled lines present in fragment images a and b are aligned with the corresponding horizontal grid lines of the virtual raster image: the horizontal ruled lines on the upper and lower parts of fragment image a are aligned with the horizontal grid lines on the upper and lower parts of the virtual raster image, the horizontal ruled lines on the upper and lower parts of fragment image b are likewise aligned, and the left and right reference lines of fragment images a and b are aligned with the vertical grid lines of the virtual raster image; keep fragment images a and b fixed in the virtual raster image, and calculate the time-series matching degree s between the partial curves T″_ras and T″_lbs overlapping in the vertical direction, taken from the normalized time-series edge curve data T″_ra and T″_lb corresponding to fragment images a and b, the maximum distance d_a between the right broken-edge part of fragment image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge part of fragment image b, restored to real physical size, and its left reference line; combining the grid cell width G′_w restored to real physical size obtained for each fragment image in step I: if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise, put s directly into the set S; finally, take the maximum value in the set S as the maximum degree of conjugation between fragment images a and b.
If there is a book border on the upper and/or lower part of fragment image a and fragment image b has no book border, place fragment images a and b on the left and right sides of the virtual raster image respectively, align the horizontal ruled lines on the upper and/or lower part of fragment image a with the corresponding horizontal grid lines of the virtual raster image, and align the left and right reference lines of fragment images a and b with the vertical grid lines of the virtual raster image; keep fragment image a fixed in the virtual raster image and align the broken-edge parts of fragment images a and b first at the top and then at the bottom; after the top alignment and after the bottom alignment, fragment image b first slides upwards in the vertical direction within the set sliding range in the virtual raster image with a stride of M pixels, then returns to its initial position, and finally slides downwards in the vertical direction within the set sliding range; at the initial position and after each slide of fragment image b, calculate the time-series matching degree s between the partial curves T″_ras and T″_lbs overlapping in the vertical direction, taken from the normalized time-series edge curve data T″_ra and T″_lb corresponding to fragment images a and b, the maximum distance d_a between the right broken-edge part of fragment image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge part of fragment image b, restored to real physical size, and its left reference line; combining the grid cell width G′_w restored to real physical size obtained for each fragment image in step I: if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise, put s directly into the set S; finally, take the maximum value in the set S as the maximum degree of conjugation between fragment images a and b.
If fragment image a has no book border and there is a book border on the upper and/or lower part of fragment image b, place fragment images a and b on the left and right sides of the virtual raster image respectively, align the horizontal ruled lines on the upper and/or lower part of fragment image b with the corresponding horizontal grid lines of the virtual raster image, and align the left and right reference lines of fragment images a and b with the vertical grid lines of the virtual raster image; keep fragment image b fixed in the virtual raster image and align the broken-edge parts of fragment images a and b first at the top and then at the bottom; after the top alignment and after the bottom alignment, fragment image a first slides upwards in the vertical direction within the set sliding range in the virtual raster image with a stride of M pixels, then returns to its initial position, and finally slides downwards in the vertical direction within the set sliding range; at the initial position and after each slide of fragment image a, calculate the time-series matching degree s between the partial curves T″_ras and T″_lbs overlapping in the vertical direction, taken from the normalized time-series edge curve data T″_ra and T″_lb corresponding to fragment images a and b, the maximum distance d_a between the right broken-edge part of fragment image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge part of fragment image b, restored to real physical size, and its left reference line; combining the grid cell width G′_w restored to real physical size obtained for each fragment image in step I: if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise, put s directly into the set S; finally, take the maximum value in the set S as the maximum degree of conjugation between fragment images a and b.
If neither fragment image a nor fragment image b has a book border, place fragment images a and b on the left and right sides of the virtual raster image respectively, and align the left and right reference lines of fragment images a and b with the vertical grid lines of the virtual raster image; align the broken-edge parts of fragment images a and b first at the top and then at the bottom; after the top alignment and after the bottom alignment, keep fragment image a fixed in the virtual raster image, let fragment image b first slide upwards in the vertical direction within the set sliding range in the virtual raster image with a stride of M pixels, then return to its initial position, and finally slide downwards in the vertical direction within the set sliding range; at the initial position and after each slide of fragment image b, calculate the time-series matching degree s between the partial curves T″_ras and T″_lbs overlapping in the vertical direction, taken from the normalized time-series edge curve data T″_ra and T″_lb corresponding to fragment images a and b, the maximum distance d_a between the right broken-edge part of fragment image a, restored to real physical size, and its right reference line, and the minimum distance d_b between the left broken-edge part of fragment image b, restored to real physical size, and its left reference line; combining the grid cell width G′_w restored to real physical size obtained for each fragment image in step I: if mod((d_a + d_b), G′_w) > N, compute s′ = s × n and put s′ into the set S; otherwise, put s directly into the set S; finally, take the maximum value in the set S as the maximum degree of conjugation between fragment images a and b.
In step K, the virtual grid image is a manually designed blank image whose horizontal length is not less than the sum of the horizontal lengths of the two Dunhuang manuscript fragment images a and b being tested for conjugation, and whose vertical height is not less than the larger of their vertical heights; grid lines are laid out uniformly inside the virtual grid image, with the grid width equal to the grid-cell width after the fragment images are restored to real physical size, and the grid height equal to the distance between the upper and lower horizontal scroll grid lines after a complete manuscript image is restored to real physical size. M is 1, mod() is the remainder (modulo) function, N is a grid-width threshold, n is a penalty coefficient, and the sliding range extends from P pixels above to P pixels below the alignment point after the broken-edge portions of images a and b are aligned.
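The grid-width consistency check described above (the mod((d_a + d_b), G′_w) > N test with penalty coefficient n) can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation; the function name and the sample values of N and n are assumptions for demonstration only.

```python
def penalized_match(s, d_a, d_b, grid_w, N, n):
    """Penalize the matching degree s when the combined distance of the two
    broken edges from their reference lines is not close to a whole number
    of grid cells (remainder above the grid-width threshold N)."""
    if (d_a + d_b) % grid_w > N:
        return s * n  # a penalty coefficient n < 1 shrinks implausible joins
    return s

# Example: grid cell 15.0 units wide; the remainder of (10.0 + 11.0) is 6.0,
# which exceeds N = 2.0, so the score 0.9 is multiplied by n = 0.5.
score = penalized_match(0.9, 10.0, 11.0, 15.0, N=2.0, n=0.5)
```

The same helper covers the unpenalized case: when the remainder stays within the threshold, s is placed into the set unchanged.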
Step K comprises the following specific sub-steps:
K0: judge whether scroll boundaries exist at the upper and/or lower parts of the two Dunhuang manuscript fragment images a and b being tested for conjugation; if neither image a nor image b has a scroll boundary, proceed to step K1; if only image a has a scroll boundary at its upper and/or lower part, proceed to step K2; if only image b has a scroll boundary at its upper and/or lower part, proceed to step K3; if scroll boundaries exist at the upper and/or lower parts of both image a and image b, proceed to step K4;
K1: create a virtual grid image, place images a and b on its left and right sides respectively, align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image, and align the upper end point of the broken-edge portion of image b with the upper end point of the broken-edge portion of image a; then proceed to step K5;
K2: create a virtual grid image, place images a and b on its left and right sides respectively, align the upper and/or lower horizontal scroll grid lines of image a with the corresponding horizontal grid lines of the virtual grid image, align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image, and align the upper end point of the broken-edge portion of image b with the upper end point of the broken-edge portion of image a; then proceed to step K5;
K3: create a virtual grid image, place images a and b on its left and right sides respectively, align the upper and/or lower horizontal scroll grid lines of image b with the corresponding horizontal grid lines of the virtual grid image, align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image, and align the upper end point of the broken-edge portion of image a with the upper end point of the broken-edge portion of image b; then proceed to step K5;
K4: create a virtual grid image, place images a and b on its left and right sides respectively, align the horizontal scroll grid lines present in images a and b with the corresponding horizontal grid lines of the virtual grid image, and align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image; then proceed to step K5;
K5: with image b at its current position in the virtual grid image, compute the time-series matching degree s between the vertically overlapping sub-curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb of images a and b; then proceed to step K6;
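The vertically overlapping sub-curves used in step K5 can be extracted with a sketch like the following, assuming each edge curve is stored as a list of (x, y) pixel positions ordered from top to bottom in a coordinate frame shared by the two placed images. The function name `overlap_subcurves` is illustrative, not from the patent.

```python
def overlap_subcurves(curve_a, curve_b):
    """Return the portions of two edge curves whose vertical (y) coordinates
    fall inside the y-interval common to both curves. Each curve is a list
    of (x, y) tuples ordered from top to bottom (y increasing downward)."""
    top = max(curve_a[0][1], curve_b[0][1])       # highest shared y value
    bottom = min(curve_a[-1][1], curve_b[-1][1])  # lowest shared y value
    sub_a = [(x, y) for x, y in curve_a if top <= y <= bottom]
    sub_b = [(x, y) for x, y in curve_b if top <= y <= bottom]
    return sub_a, sub_b
```

After sliding image b, only the y coordinates of its curve change, so the same helper recomputes the overlap at each offset.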
K6: the head and tail end points of the normalized time-series edge-curve data T″_ra and T″_lb are each connected directly, giving line segments of lengths L_a and L_b, with max(L_a, L_b) denoting the larger of the two; then the head and tail end points of T″_lbs are connected directly, giving a line segment of length L_c; if L_c is greater than or equal to the length threshold, proceed to step K7; otherwise proceed to step K8;
In this embodiment, the length threshold is 77% of max(L_a, L_b);
K7: first compute the time-series matching degree s between the sub-curves T″_ras and T″_lbs: compute the difference of the abscissas at each pair of corresponding positions of T″_ras and T″_lbs, form the differences in order into a difference array d, and record the number of elements of d whose value is less than or equal to the difference threshold as t_c; combining this with the segment length L_c obtained in step K6, the time-series matching degree is computed as s = t_c / L_c;
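A minimal sketch of the s = t_c / L_c computation in step K7, assuming the two sub-curves have been resampled to the same number of points so that corresponding positions can be compared pairwise. The default difference threshold is the 2.2 used in this embodiment; the function name is illustrative.

```python
import math

def matching_degree(sub_a, sub_b, diff_threshold=2.2):
    """Time-series matching degree s = t_c / L_c: t_c counts corresponding
    points whose abscissa (x) difference is within the threshold, and L_c
    is the straight-line length between the end points of sub_b (T''_lbs)."""
    diffs = [abs(xa - xb) for (xa, _), (xb, _) in zip(sub_a, sub_b)]
    t_c = sum(1 for d in diffs if d <= diff_threshold)
    (x0, y0), (x1, y1) = sub_b[0], sub_b[-1]
    l_c = math.hypot(x1 - x0, y1 - y0)  # chord length of the sub-curve
    return t_c / l_c if l_c > 0 else 0.0
```

Note that s can exceed 1 when many points match over a short chord; the patent uses it only as a relative ranking score, so no normalization to [0, 1] is assumed here.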
then compute the maximum distance d_a between the right broken-edge portion of Dunhuang manuscript fragment image a and its right reference line after restoration to real physical size, and the minimum distance d_b between the left broken-edge portion of image b and its left reference line after restoration to real physical size;
finally, combining these with the grid-cell width G′_w, obtained in step I, of each fragment image restored to real physical size, if mod((d_a + d_b), G′_w) > N, s′ = s × n is computed and s′ is put into the set S, otherwise s is put into S directly; then proceed to step K9;
In this embodiment, the difference threshold is 2.2;
K8: set the time-series matching degree s between the sub-curves T″_ras and T″_lbs to 0, and put this value of s into the set S; then proceed to step K9;
K9: if a scroll boundary exists at the upper and/or lower part of image a and a scroll boundary exists at the upper and/or lower part of image b, proceed to step L; if only image b has a scroll boundary at its upper and/or lower part, proceed to step K13; if image b has no scroll boundary, proceed to step K10;
K10: with a stride of 1 pixel and the upper end point of the broken-edge portion of image a as the reference point, slide image b upward and then downward in the virtual grid image, the sliding range not exceeding P pixels above and below the upper end point of the broken-edge portion of image a; repeat steps K5 to K8 after each move, until image b has slid to the boundaries of the sliding range in the virtual grid image, i.e. until the upper end point of the broken-edge portion of image b has coincided in turn with the pixels at the upper and lower ends of the sliding range; then proceed to step K11;
K11: keep images a and b on the left and right sides of the virtual grid image, align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image, align the lower end point of the broken-edge portion of image a with the lower end point of the broken-edge portion of image b, and perform steps K5 to K8 in sequence; after step K8 is finished, proceed directly to step K12;
K12: with a stride of 1 pixel and the lower end point of the broken-edge portion of image a as the reference point, slide image b upward and then downward in the virtual grid image, the sliding range not exceeding P pixels above and below the lower end point of the broken-edge portion of image a; repeat steps K5 to K8 after each move, until image b has slid to the boundaries of the sliding range in the virtual grid image, i.e. until the lower end point of the broken-edge portion of image b has coincided in turn with the pixels at the upper and lower ends of the sliding range; then proceed to step L;
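The 1-pixel-stride sliding search of steps K10 and K12 can be outlined as below; `score_at_offset` is a hypothetical stand-in for one execution of steps K5 to K8 at a given vertical offset, and `p` corresponds to the P-pixel sliding range.

```python
def slide_and_score(score_at_offset, p):
    """Evaluate the match at the aligned position (offset 0), then at every
    1-pixel offset up to p pixels above and below it, and return the best
    score collected (outline of steps K10/K12)."""
    scores = [score_at_offset(0)]              # initial aligned position
    for off in range(1, p + 1):
        scores.append(score_at_offset(-off))   # slide upward
    for off in range(1, p + 1):
        scores.append(score_at_offset(+off))   # slide downward
    return max(scores)                         # best conjugation degree found

# Toy score function peaking at offset +2, scanned over a range of p = 5 pixels.
best = slide_and_score(lambda off: 1.0 - abs(off - 2) / 10.0, 5)
```

In the patented method every score also enters the set S (so the tie-breaking sliding distance of step L can be recovered); the sketch keeps only the maximum for brevity.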
K13: with a stride of 1 pixel and the upper end point of the broken-edge portion of image b as the reference point, slide image a upward and then downward in the virtual grid image, the sliding range not exceeding P pixels above and below the upper end point of the broken-edge portion of image b; repeat steps K5 to K8 after each move, until image a has slid to the boundaries of the sliding range in the virtual grid image, i.e. until the upper end point of the broken-edge portion of image a has coincided in turn with the pixels at the upper and lower ends of the sliding range; then proceed to step K14;
K14: keep images a and b on the left and right sides of the virtual grid image, align the left and right reference lines of images a and b with vertical grid lines of the virtual grid image, align the upper and/or lower scroll boundaries of image b with the corresponding horizontal grid lines of the virtual grid image, align the lower end point of the broken-edge portion of image a with the lower end point of the broken-edge portion of image b, and perform steps K5 to K8 in sequence; after step K8 is finished, proceed directly to step K15;
K15: with a stride of 1 pixel and the lower end point of the broken-edge portion of image b as the reference point, slide image a upward and then downward in the virtual grid image, the sliding range not exceeding P pixels above and below the lower end point of the broken-edge portion of image b; repeat steps K5 to K8 after each move, until image a has slid to the boundaries of the sliding range in the virtual grid image, i.e. until the lower end point of the broken-edge portion of image a has coincided in turn with the pixels at the upper and lower ends of the sliding range; then proceed to step L;
L: for Dunhuang manuscript fragment image a, compute its time-series matching degree with each Dunhuang manuscript fragment image in the folder to be compared in turn, according to the method of step K; finally, sort by the time-series matching degree from large to small, with the smaller sliding distance taking priority when matching degrees are equal, and return the top H images with the highest time-series matching degree with image a as candidate images with a higher conjugation degree with image a.
In this embodiment, H is 5.
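The ranking in step L (sort by matching degree, ties broken by the smaller sliding distance, return the top H) can be sketched as follows; the dictionary field names are illustrative assumptions, not identifiers from the patent.

```python
def top_candidates(results, h=5):
    """Sort candidate fragments by matching degree (descending); ties are
    broken in favor of the smaller sliding distance; return the top h."""
    ranked = sorted(results, key=lambda r: (-r["score"], r["slide"]))
    return ranked[:h]

candidates = [
    {"name": "frag1", "score": 0.91, "slide": 3},
    {"name": "frag2", "score": 0.91, "slide": 1},  # same score, smaller slide wins
    {"name": "frag3", "score": 0.75, "slide": 0},
]
best = top_candidates(candidates, h=2)  # frag2 ranks ahead of frag1
```

Python's `sorted` is stable, so the two-component key implements the tie-breaking rule in a single pass.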

Claims (10)

1. An automatic conjugation method for Dunhuang manuscript fragment images, characterized by comprising the following steps:
A: manually determining the reference lines at the four edges (upper, lower, left and right) of a Dunhuang manuscript fragment image, together with a middle reference line immediately adjacent to the left reference line, to obtain a reference-annotated fragment image;
B: using a computer to locate, in the reference-annotated fragment image obtained in step A, the position coordinate point U of the upper reference line, the position coordinate point D of the lower reference line, the position coordinate point L of the left reference line, the position coordinate point R of the right reference line, and the position coordinate point M of the middle reference line;
C: using a computer to calculate the width of a grid cell in the reference-annotated fragment image;
D: selecting a Dunhuang manuscript fragment image of known real physical size, obtaining the real physical width of its grid cell, obtaining the grid-cell width of its reference-annotated fragment image according to steps A to C, and calculating the scaling ratio γ by which the grid-cell width of the reference-annotated fragment image is restored to real physical size, where γ = β² and β is the ratio of the grid-cell width value in the fragment image to the real physical grid-cell width of the fragment;
E: performing edge detection on the Dunhuang manuscript fragment images to extract their edge lines, obtaining an edge-line image corresponding to each fragment image;
F: using a computer to obtain the edge-line skeleton in the edge-line image corresponding to each fragment image, obtaining an edge-line skeleton image for each fragment image, the edge-line skeleton being the centered pixels of the edge line;
G: manually determining the left and right broken-edge portions in the edge-line skeleton image of each fragment image, obtaining an annotated edge-line skeleton image for each fragment image;
H: converting the left and right broken-edge portions of the edge-line skeleton in each annotated image obtained in step G, through time serialization, into corresponding two-dimensional numerical time-series data;
I: using the scaling ratio γ and the multiple relationship β obtained in step D, converting the reference-line position coordinate points L:(l_x, l_y), M:(m_x, m_y), R:(r_x, r_y), U:(u_x, u_y) and D:(d_x, d_y) obtained in step B into the position coordinate points L′:(l′_x, l′_y), M′:(m′_x, m′_y), R′:(r′_x, r′_y), U′:(u′_x, u′_y) and D′:(d′_x, d′_y) restored to real physical size; then converting the grid-cell width G_w obtained in step C into the restored grid-cell width G′_w; then converting the two-dimensional time-series data T_l and T_r of the left and right broken-edge portions obtained in step H into the restored data T′_l and T′_r, where T′_l = {(V′_l1, W′_l1), (V′_l2, W′_l2), (V′_l3, W′_l3), …, (V′_li, W′_li)}, T′_r = {(V′_r1, W′_r1), (V′_r2, W′_r2), (V′_r3, W′_r3), …, (V′_ri, W′_ri)}, i is a positive integer, and (V′_li, W′_li) and (V′_ri, W′_ri) denote the pixel positions of the i-th pixel of the left and right broken-edge portions after restoration to real physical size;
J: normalizing V′_li in T′_l and V′_ri in T′_r obtained in step I, together with the reference-line position coordinate points L′:(l′_x, l′_y) and R′:(r′_x, r′_y), to obtain the normalized time-series edge-curve data T″_l and T″_r and the normalized relative reference-line coordinates L″:(l″_x, l″_y) and R″:(r″_x, r″_y), where T″_l = {(V″_l1, W″_l1), (V″_l2, W″_l2), (V″_l3, W″_l3), …, (V″_li, W″_li)} and T″_r = {(V″_r1, W″_r1), (V″_r2, W″_r2), (V″_r3, W″_r3), …, (V″_ri, W″_ri)};
K: for two Dunhuang manuscript fragment images to be tested for conjugation, obtaining the normalized time-series edge-curve data T″_l and T″_r of the broken-edge portions of their edge-line skeletons according to step J, then calculating the time-series matching degree s of the two fragment images and putting s into a set S;
L: for a Dunhuang manuscript fragment image a, calculating its time-series matching degree with each fragment image in the folder to be compared in turn according to the method of step K; finally sorting by the time-series matching degree from large to small, with the smaller sliding distance taking priority when matching degrees are equal, and returning the top H images with the highest time-series matching degree with image a as candidate images with a higher conjugation degree with image a.
2. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 1, characterized in that, in step A:
first, for a fragment image with a scroll boundary at its upper part, a horizontal line of a first color and 1 pixel wide is drawn with a pixel pen at the horizontal scroll grid line on the upper side of the fragment image as the upper reference line; for a fragment image with a scroll boundary at its lower part, a horizontal line of the first color and 1 pixel wide is drawn at the horizontal scroll grid line on the lower side as the lower reference line;
then, a vertical line of a second color and 1 pixel wide is drawn with a pixel pen at the unbroken vertical scroll grid line closest to the left broken edge of the fragment image as the left reference line, and at the unbroken vertical scroll grid line closest to the right broken edge as the right reference line;
finally, it is judged whether a vertical scroll grid line other than the right reference line is adjacent to the right of the left reference line in the fragment image; if so, a vertical line of the second color and 1 pixel wide is drawn at that vertical scroll grid line as the middle reference line.
3. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 2, characterized in that, in step B:
first, the initial coordinates of the reference-line position coordinate points U, D, L, M and R are all set to (0, 0); then, using color features, the pixel positions of all pixels matching the pixel value of the second color are extracted from left to right on the horizontal line passing through the vertical midpoint of the reference-annotated fragment image; if two pixel positions are extracted they are saved in turn as L:(l_x, l_y) and R:(r_x, r_y); if three pixel positions are extracted they are saved in turn as L:(l_x, l_y), M:(m_x, m_y) and R:(r_x, r_y);
on the vertical line passing through the horizontal midpoint of the reference-annotated fragment image, the pixel positions of all pixels matching the pixel value of the first color are extracted from top to bottom; if two pixel positions are extracted they are saved in turn as U:(u_x, u_y) and D:(d_x, d_y); if only one pixel position is extracted, it is judged whether that pixel position lies in the upper part of the reference-annotated fragment image: if so it is saved as U:(u_x, u_y), otherwise as D:(d_x, d_y).
4. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 3, characterized in that, in step C: according to the positions of the left reference-line coordinate point L:(l_x, l_y), the middle reference-line coordinate point M:(m_x, m_y) and the right reference-line coordinate point R:(r_x, r_y) obtained in step B, if M is (0, 0) the grid-cell width is G_w = r_x − l_x; otherwise G_w = m_x − l_x.
5. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 4, characterized in that, in step F: for the edge-line image corresponding to each fragment image, the edge-line skeleton in the edge-line image is enhanced according to a pixel threshold Q and the non-skeleton pixels are set to background, giving the edge-line skeleton image corresponding to each fragment image; the edge-line skeleton refers to the centered pixels of the 3-pixel-wide edge line obtained in step E.
6. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 5, characterized in that, in step H: the pixel position of each pixel of the left and right broken-edge portions of the edge-line skeleton in each annotated image obtained in step G is extracted in order from top to bottom and from left to right, and the pixel positions so obtained are combined in sequence to form the two-dimensional time-series data T_l of the left broken-edge portion and T_r of the right broken-edge portion, where T_l = {(V_l1, W_l1), (V_l2, W_l2), (V_l3, W_l3), …, (V_li, W_li)}, T_r = {(V_r1, W_r1), (V_r2, W_r2), (V_r3, W_r3), …, (V_ri, W_ri)}, i is a positive integer, (V_li, W_li) denotes the pixel position of the i-th pixel of the left broken-edge portion, and (V_ri, W_ri) denotes the pixel position of the i-th pixel of the right broken-edge portion.
7. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 6, characterized in that, in step I:
when converting the reference-line position coordinate points L, M, R, U and D into the restored points L′, M′, R′, U′ and D′, the abscissa l_x of L:(l_x, l_y) is converted by the operation l_x/γ into l′_x and the ordinate l_y by l_y/γ into l′_y, finally giving the restored point L′:(l′_x, l′_y); the remaining restored points M′:(m′_x, m′_y), R′:(r′_x, r′_y), U′:(u′_x, u′_y) and D′:(d′_x, d′_y) are obtained in the same way;
when converting the grid-cell width G_w into the restored grid-cell width G′_w, G′_w is obtained by the operation G_w/β;
when converting the two-dimensional time-series data T_l and T_r into T′_l and T′_r, V′_li is obtained by the operation V_li/γ, W′_li by W_li/γ, V′_ri by V_ri/γ, and W′_ri by W_ri/γ.
8. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 7, characterized in that, in step J: during normalization, the minimum min(V′_li) over T′_l and the minimum min(V′_ri) over T′_r are first computed; then min(V′_li) is subtracted from V′_li of every datum in T′_l and from l′_x of the reference-line point L′, and min(V′_ri) is subtracted from V′_ri of every datum in T′_r and from r′_x of the reference-line point R′, giving the normalized time-series edge-curve data T″_l and T″_r and the reference-line coordinates L″:(l″_x, l″_y) and R″:(r″_x, r″_y), where W″_li = W′_li, W″_ri = W′_ri, l″_y = l′_y and r″_y = r′_y.
9. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 8, characterized in that, in step K: when calculating the time-series matching degree of two Dunhuang manuscript fragment images, it is first judged whether scroll boundaries exist at the upper and/or lower parts of fragment images a and b:
if a scroll boundary exists at the upper and/or lower part of image a and a scroll boundary exists at the upper and/or lower part of image b, images a and b are placed on the left and right sides of a virtual grid image respectively, the horizontal scroll grid lines present in images a and b are aligned with the corresponding horizontal grid lines of the virtual grid image, i.e. the upper and lower horizontal scroll grid lines of image a with the upper and lower horizontal grid lines of the virtual grid image and the upper and lower horizontal scroll grid lines of image b with the upper and lower horizontal grid lines of the virtual grid image, and the left and right reference lines of images a and b are aligned with vertical grid lines of the virtual grid image; keeping images a and b fixed in the virtual grid image, the time-series matching degree s between the vertically overlapping sub-curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb of images a and b is computed, together with the maximum distance d_a between the right broken-edge portion of image a and its right reference line after restoration to real physical size and the minimum distance d_b between the left broken-edge portion of image b and its left reference line after restoration to real physical size; combining these with the grid-cell width G′_w, obtained in step I, of each fragment image restored to real physical size, if mod((d_a + d_b), G′_w) > N, s′ = s × n is computed and s′ is put into the set S, otherwise s is put into S directly; finally, the maximum value in the set S is taken as the maximum conjugation degree between images a and b;
if a scroll boundary exists at the upper and/or lower part of image a but image b has no scroll boundary, images a and b are placed on the left and right sides of the virtual grid image respectively, the upper and/or lower horizontal scroll grid lines of image a are aligned with the corresponding horizontal grid lines of the virtual grid image, and the left and right reference lines of images a and b are aligned with vertical grid lines of the virtual grid image; keeping image a fixed in the virtual grid image, the broken-edge portions of images a and b are top-aligned and then bottom-aligned in turn; after each alignment, image b is first slid upward in the vertical direction within the set sliding range with a stride of M pixels, then returned to its initial position, and finally slid downward in the vertical direction within the set sliding range; at the initial position of image b and after each slide, the time-series matching degree s between the vertically overlapping sub-curves T″_ras and T″_lbs of the normalized time-series edge-curve data T″_ra and T″_lb of images a and b is computed, together with the maximum distance d_a between the right broken-edge portion of image a and its right reference line after restoration to real physical size and the minimum distance d_b between the left broken-edge portion of image b and its left reference line after restoration to real physical size; combining these with the grid-cell width G′_w, obtained in step I, of each fragment image restored to real physical size, if mod((d_a + d_b), G′_w) > N, s′ = s × n is computed and s′ is put into the set S, otherwise s is put into S directly; finally, the maximum value in the set S is taken as the maximum conjugation degree between images a and b;
Slide down within the set sliding range; at the initial position of the Dunhuang posthumous fragment image b and after each slide, calculate the normalized time-serialized edge curve data corresponding to the two Dunhuang posthumous fragment images a and b The time series matching degree s between the sub-curves T″ras and T″lbs of the overlapping parts of T″ra and T″lb in the vertical direction, the restoration of the right broken edge of the Dunhuang suicide note fragment image a and the Dunhuang suicide note fragment image a The maximum distance da to the right reference line after the real physical size and the minimum distance db between the broken edge part of the left side of the Dunhuang suicide note fragment image b and the left reference line after the Dunhuang suicide note fragment imageb is restored to the real physical size, combined The grid cell widthG'w after each piece of Dunhuang posthumous note image obtained in step 1 is restored to the real physical size, if mod((da +db ), G'w )>N, then the time series matching degree s Do the operation s×n to get s', then put the time series matching degree s' into the set S, otherwise put the time series matching degree s directly into the set S; finally, take the maximum value in the set S as the Dunhuang posthumous fragment image maximum degree of conjugation between a and 
b;若敦煌遗书残片图像a不存在书卷边界,敦煌遗书残片图像b的上部和/或下部存在书卷边界,则将敦煌遗书残片图像a和b分别放入虚拟栅格图像中的左侧和右侧,且使敦煌遗书残片图像b上部和/下部的书卷横向网格线分别与虚拟栅格图像中对应的网格横线对齐,敦煌遗书残片图像a和b中左侧及右侧的基准线分别与虚拟栅格图像中的网格竖线对齐;保持敦煌遗书残片图像b在虚拟栅格图像中固定不变,先后对敦煌遗书残片图像a和b的断边部分进行上对齐和下对齐;在上对齐和下对齐后,先将敦煌遗书残片图像a以M像素为步幅在虚拟栅格图像中沿竖直方向在设定的滑动范围内向上滑动,然后回到初始位置,最后沿竖直方向在设定的滑动范围内向下滑动;在敦煌遗书残片图像a的初始位置及每次滑动后,计算两幅敦煌遗书残片图像a和b对应的经归一化处理后的时间序列化边缘曲线数据T″ra和T″lb在竖直方向上重合部分的子曲线T″ras和T″lbs之间的时间序列匹配度s、敦煌遗书残片图像a右侧断边部分与敦煌遗书残片图像a恢复到真实物理尺寸后的右侧基准线的最大距离da以及敦煌遗书残片图像b左侧断边部分与敦煌遗书残片图像b恢复到真实物理尺寸后的左侧基准线的最小距离db,结合步骤I得到的每幅敦煌遗书残片图像恢复到真实物理尺寸后的网格单元宽度G′w,若mod((da+db),G′w)>N,则将时间序列匹配度s做运算s×n得到s′,然后将时间序列匹配度s′放入集合S中,否则直接将时间序列匹配度s放入集合S中;最后将集合S中的最大值作为敦煌遗书残片图像a和b之间的最大缀合度;If the Dunhuang suicide note fragment image a does not have a scroll boundary, and the upper and/or lower part of the Dunhuang suicide note fragment image b has a scroll boundary, then the Dunhuang suicide note fragment images a and b are placed on the left and right sides of the virtual grid image, respectively, And make the horizontal grid lines of the scroll in the upper and/or lower part of the image b of the Dunhuang suicide note to be aligned with the corresponding horizontal grid lines in the virtual grid image respectively, and the reference lines on the left and right in the images a and b of the Dunhuang suicide note fragment are respectively aligned with Align the vertical grid lines in the virtual grid image; keep the Dunhuang suicide note fragment image b fixed in the virtual grid image, and align the broken edges of the Dunhuang suicide note fragment images a and b successively up and down; After aligning and bottom-aligning, first slide the Dunhuang posthumous remnant image a with M pixels as the stride in the vertical direction in the virtual grid image within the set sliding range, then return to the initial position, and finally move along the vertical direction. 
Slide down within the set sliding range; at the initial position of the Dunhuang posthumous fragment image a and after each slide, calculate the normalized time-serialized edge curve data corresponding to the two Dunhuang posthumous fragment images a and b The time series matching degree s between the sub-curves T″ras and T″lbs of the overlapping parts of T″ra and T″lb in the vertical direction, the restoration of the right broken edge of the Dunhuang suicide note fragment image a and the Dunhuang suicide note fragment image a The maximum distance da to the right reference line after the real physical size and the minimum distance db between the broken edge part of the left side of the Dunhuang suicide note fragment image b and the left reference line after the Dunhuang suicide note fragment imageb is restored to the real physical size, combined The grid cell widthG'w after each piece of Dunhuang posthumous note image obtained in step 1 is restored to the real physical size, if mod((da +db ), G'w )>N, then the time series matching degree s Do the operation s×n to get s', then put the time series matching degree s' into the set S, otherwise put the time series matching degree s directly into the set S; finally, take the maximum value in the set S as the Dunhuang posthumous fragment image maximum degree of conjugation between a and b;若敦煌遗书残片图像a和b均不存在书卷边界,则将敦煌遗书残片图像a和b分别放入虚拟栅格图像中的左侧和右侧,敦煌遗书残片图像a和b中左侧及右侧的基准线分别与虚拟栅格图像中的网格竖线对齐;先后对敦煌遗书残片图像a和b的断边部分进行上对齐和下对齐;在上对齐和下对齐后,保持敦煌遗书残片图像a在虚拟栅格图像中固定不变,先将敦煌遗书残片图像b以M像素为步幅在虚拟栅格图像中沿竖直方向在设定的滑动范围内向上滑动,然后回到初始位置,最后沿竖直方向在设定的滑动范围内向下滑动;在敦煌遗书残片图像b的初始位置及每次滑动后,计算两幅敦煌遗书残片图像a和b对应的经归一化处理后的时间序列化边缘曲线数据T″ra和T″lb在竖直方向上重合部分的子曲线T″ras和T″lbs之间的时间序列匹配度s、敦煌遗书残片图像a右侧断边部分与敦煌遗书残片图像a恢复到真实物理尺寸后的右侧基准线的最大距离da以及敦煌遗书残片图像b左侧断边部分与敦煌遗书残片图像b恢复到真实物理尺寸后的左侧基准线的最小距离db,结合步骤I得到的每幅敦煌遗书残片图像恢复到真实物理尺寸后的网格单元宽度G′w,若mod((da+db),G′w)>N,则将时间序列匹配度s做运算s×n得到s′,然后将时间序列匹配度s′放入集合S中,否则直接将时间序列匹配度s放入集合S中;最后将集合S中的最大值作为敦煌遗书残片图像a和b之间的最大缀合度;If there is no book boundary in the images a and b of the 
Dunhuang suicide note fragments, put the images a and b of the Dunhuang suicide note fragments into the left and right sides of the virtual grid image, respectively, and put the images a and b of the Dunhuang suicide note fragments on the left and right sides respectively. The reference lines on the sides are respectively aligned with the vertical grid lines in the virtual grid image; the broken edge parts of the Dunhuang suicide note fragment images a and b are aligned up and down successively; after the top alignment and bottom alignment, keep the Dunhuang suicide note fragment Image a is fixed in the virtual grid image. First, the Dunhuang suicide note fragment image b is slid up in the vertical direction within the set sliding range in the virtual grid image with a stride of M pixels, and then returns to the initial position. , and finally slide down within the set sliding range in the vertical direction; at the initial position of the Dunhuang posthumous note fragment image b and after each slide, calculate the normalized corresponding to the two Dunhuang posthumous note fragment images a and b. 
The time series matching degree s between the sub-curves T″ras and T″lbs of the overlapping part of the time series edge curve data T″ra and T″lb in the vertical direction; The maximum distance of the right reference line after the Dunhuang suicide note fragment image a is restored to its real physical size da , and the left edge of the Dunhuang suicide note fragment image b and the Dunhuang suicide note fragment image b restored to the real physical size of the left reference line The minimum distance db , combined with the grid cell widthG′w after restoring each piece of Dunhuang posthumous fragments obtained in step I to the real physical size, if mod((da +db ), G′w )>N, then The time series matching degree s is calculated by s×n to get s′, and then the time series matching degree s′ is put into the set S, otherwise, the time series matching degree s is directly put into the set S; value as the maximum degree of conjugation between the images a and b of the Dunhuang suicide note fragment;虚拟栅格图像为人工设计的空白图像,虚拟栅格图像的水平长度不小于拟判定是否能够缀合的两幅敦煌遗书残片图像a和b的水平长度之和,竖直高度不小于拟判定是否能够缀合的两幅敦煌遗书残片图像a和b的竖直高度的较大值,且虚拟栅格图像内部均匀设置有网格,网格宽度与敦煌遗书残片图像恢复到真实物理尺寸后的网格单元宽度相等,网格高度与完整文书图像恢复到真实物理尺寸后上部和下部的书卷横向网格线之间的距离相等,M=1,mod()为求余函数,N为栅格宽度阈值,n为惩罚系数,滑动范围为敦煌遗书残片图像a和b的断边部分在上对齐和下对齐后,从对齐点的上端P像素至下端P像素。The virtual grid image is an artificially designed blank image. 
The horizontal length of the virtual grid image is not less than the sum of the horizontal lengths of the two fragment images a and b whose conjugability is to be judged, and its vertical height is not less than the larger of the vertical heights of a and b. A uniform grid is drawn inside the virtual grid image: the grid width equals the grid cell width of the fragment images restored to real physical size, and the grid height equals the distance between the top and bottom horizontal scroll grid lines of a complete document image restored to real physical size. M = 1; mod() is the remainder function; N is the grid-width threshold; n is the penalty coefficient; the sliding range extends from P pixels above to P pixels below the alignment point after the broken-edge portions of images a and b have been top-aligned or bottom-aligned.

10. The automatic conjugation method for Dunhuang manuscript fragment images according to claim 8, wherein step K comprises the following specific steps:

K0: Determine whether a scroll boundary exists at the top and/or bottom of the two fragment images a and b whose conjugability is to be judged. If neither a nor b has a scroll boundary, go to step K1; if only image a has a scroll boundary at its top and/or bottom, go to step K2; if only image b has a scroll boundary at its top and/or bottom, go to step K3; if both a and b have scroll boundaries at their tops and/or bottoms, go to step K4.

K1: Create a virtual grid image; place fragment images a and b on its left and right sides respectively; align the left and right reference lines of a and b with vertical grid lines of the virtual grid image; align the upper endpoint of the broken-edge portion of image b with the upper endpoint of the broken-edge portion of image a; then go to step K5.

K2: Create a virtual grid image; place a and b on its left and right sides respectively; align the horizontal scroll grid lines at the top and/or bottom of image a with the corresponding horizontal grid lines of the virtual grid image; align the left and right reference lines of a and b with vertical grid lines; align the upper endpoint of the broken-edge portion of image b with the upper endpoint of the broken-edge portion of image a; then go to step K5.

K3: Create a virtual grid image; place a and b on its left and right sides respectively; align the horizontal scroll grid lines at the top and/or bottom of image b with the corresponding horizontal grid lines of the virtual grid image; align the left and right reference lines of a and b with vertical grid lines; align the upper endpoint of the broken-edge portion of image a with the upper endpoint of the broken-edge portion of image b; then go to step K5.

K4: Create a virtual grid image; place a and b on its left and right sides respectively; align the horizontal scroll grid lines present in a and b with the corresponding horizontal grid lines of the virtual grid image; align the left and right reference lines of a and b with vertical grid lines; then go to step K5.

K5: With the fragment images at their current positions in the virtual grid image, compute the time-series matching degree s between the sub-curves T″ras and T″lbs of the vertically overlapping portions of the normalized time-serialized edge curve data T″ra and T″lb of images a and b; then go to step K6.

K6: Connect the first and last endpoints of T″ra and of T″lb by straight line segments of lengths La and Lb respectively, with max(La, Lb) the larger of the two; then connect the first and last endpoints of T″lbs by a straight line segment of length Lc. If Lc is greater than or equal to the length threshold, go to step K7; otherwise, go to step K8.

K7: First compute the time-series matching degree s between the sub-curves T″ras and T″lbs: compute the difference of the abscissas of T″ras and T″lbs at each corresponding position, arrange these differences in order into the difference array d, and count the number of elements of d whose value is less than or equal to the difference threshold, denoted tc; with the segment length Lc obtained in step K6, the time-series matching degree is s = tc/Lc.

Then compute the maximum distance da between the broken-edge portion on the right side of image a and the right reference line of image a restored to real physical size, and the minimum distance db between the broken-edge portion on the left side of image b and the left reference line of image b restored to real physical size.

Finally, using the grid cell width G′w of each fragment image restored to real physical size obtained in step I: if mod((da + db), G′w) > N, compute s′ = s × n and put s′ into the set S; otherwise put s into S directly. Then go to step K9.

K8: Set the time-series matching degree s between the sub-curves T″ras and T″lbs to 0 and put this value into the set S; then go to step K9.

K9: If image a has a scroll boundary at its top and/or bottom and image b also has a scroll boundary at its top and/or bottom, go to step L; if only image b has a scroll boundary at its top and/or bottom, go to step K13; if image b has no scroll boundary, go to step K10.

K10: With a stride of 1 pixel, slide image b upward and then downward in the virtual grid image, taking the upper endpoint of the broken-edge portion of image a as the reference point, with the sliding range not exceeding P pixels above and below that endpoint. Repeat steps K5 to K8 after each move until image b has slid to the boundaries of the sliding range, that is, until the upper endpoint of the broken-edge portion of image b coincides in turn with the pixels at the upper and lower ends of the range; then go to step K11.

K11: Keeping images a and b on the left and right sides of the virtual grid image with their left and right reference lines aligned with vertical grid lines, align the lower endpoint of the broken-edge portion of image a with the lower endpoint of the broken-edge portion of image b, and execute steps K5 to K8 in sequence; after step K8 completes, go directly to step K12.

K12: With a stride of 1 pixel, slide image b upward and then downward in the virtual grid image, taking the lower endpoint of the broken-edge portion of image a as the reference point, with the sliding range not exceeding P pixels above and below that endpoint. Repeat steps K5 to K8 after each move until image b has slid to the boundaries of the sliding range, that is, until the lower endpoint of the broken-edge portion of image b coincides in turn with the pixels at the upper and lower ends of the range; then go to step L.

K13: With a stride of 1 pixel, slide image a upward and then downward in the virtual grid image, taking the upper endpoint of the broken-edge portion of image b as the reference point, with the sliding range not exceeding P pixels above and below that endpoint. Repeat steps K5 to K8 after each move until image a has slid to the boundaries of the sliding range, that is, until the upper endpoint of the broken-edge portion of image a coincides in turn with the pixels at the upper and lower ends of the range; then go to step K14.

K14: Keeping images a and b on the left and right sides of the virtual grid image, with their left and right reference lines aligned with vertical grid lines and the top and bottom scroll boundaries of image b aligned with the top and bottom horizontal grid lines of the virtual grid image, align the lower endpoint of the broken-edge portion of image a with the lower endpoint of the broken-edge portion of image b, and execute steps K5 to K8 in sequence; after step K8 completes, go directly to step K15.

K15: With a stride of 1 pixel, slide image a upward and then downward in the virtual grid image, taking the lower endpoint of the broken-edge portion of image b as the reference point, with the sliding range not exceeding P pixels above and below that endpoint. Repeat steps K5 to K8 after each move until image a has slid to the boundaries of the sliding range, that is, until the lower endpoint of the broken-edge portion of image a coincides in turn with the pixels at the upper and lower ends of the range; then go to step L.
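The core of steps K6 to K8 (chord-length gate, per-position difference count, and grid-width penalty) can be sketched as follows. This is a minimal illustration under assumptions, not the patented implementation: the function name and parameter names are invented for the sketch, Python's float modulo stands in for mod(), and the overlapping sub-curves are modeled as x-coordinate samples taken at consecutive 1-pixel y positions.

```python
import math

def matching_degree(xa, xb, diff_threshold, length_threshold,
                    d_a, d_b, grid_w, N, n):
    """Time-series matching degree between two overlapping edge sub-curves.

    xa, xb: x-coordinates of the sub-curves T''_ras and T''_lbs, sampled at
    the same consecutive 1-pixel y positions (both lists have equal length).
    d_a, d_b: broken-edge-to-reference-line distances of the two fragments.
    grid_w: grid cell width G'_w; N: grid-width threshold; n: penalty factor.
    """
    assert len(xa) == len(xb), "sub-curves must cover the same y range"
    # K6: length of the straight segment joining the first and last points
    # of the overlapping sub-curve (vertical extent is len - 1 pixels).
    L_c = math.hypot(xb[-1] - xb[0], len(xb) - 1)
    if L_c < length_threshold:
        return 0.0  # K8: overlap too short, matching degree is set to 0
    # K7: per-position abscissa differences; count those within the threshold.
    t_c = sum(1 for a, b in zip(xa, xb) if abs(a - b) <= diff_threshold)
    s = t_c / L_c
    # Grid-width consistency penalty: if the combined distances are not close
    # to a multiple of the grid cell width, scale s by the penalty factor n.
    if (d_a + d_b) % grid_w > N:
        s *= n
    return s
```

For two identical 10-sample sub-curves, L_c = 9 and t_c = 10, so s = 10/9; if d_a + d_b deviates from a multiple of grid_w by more than N, that value is further scaled by n.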
CN202110440552.4A | 2021-04-23 | An automatic conjugation method for images of Dunhuang manuscript fragments | Active | CN112991185B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110440552.4A (granted as CN112991185B) | 2021-04-23 | 2021-04-23 | An automatic conjugation method for images of Dunhuang manuscript fragments


Publications (2)

Publication Number | Publication Date
CN112991185A | 2021-06-18
CN112991185B | 2022-09-13

Family

ID=76340007

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110440552.4A (Active, granted as CN112991185B) | An automatic conjugation method for images of Dunhuang manuscript fragments | 2021-04-23 | 2021-04-23

Country Status (1)

Country | Link
CN | CN112991185B (en)

Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102087742A * | 2011-01-26 | 2011-06-08 | 王爱民 | Tortoise shell fragment conjugation method based on image processing
CN111292283A * | 2020-01-21 | 2020-06-16 | Henan University | Oracle bone fragment conjugation method based on time-series similarity calculation
CN111951152A * | 2020-08-10 | 2020-11-17 | Henan University | An oracle bone conjugation method considering both original-edge continuity and broken-edge fracture-interface matching degree
US20200382702A1 * | 2018-02-22 | 2020-12-03 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Generating Panoramic Images


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party

Title
WU YOUGUANG et al.: "Color and Contour Based Reconstruction of Fragmented Image", IEEE Xplore *
刘华 et al.: "Design and Structure of the Metadata Standard for Dunhuang Manuscripts in the Dunhuang Studies Digital Library", Journal of Shanghai Jiao Tong University *
莫伯峰 et al.: "Five New Conjugations of Bin-Group Oracle Bones, with Interpretations", Huaxia Archaeology *
韩嘉楠: "Research on Retrieval Techniques for Related Fragments in Dunhuang Manuscript Conjugation and System Implementation", China Excellent Master's Theses Full-Text Database, Information Science and Technology Series (monthly) *

Also Published As

Publication number | Publication date
CN112991185B (en) | 2022-09-13

Similar Documents

Publication | Title
CN102800148B | RMB serial number recognition method
CN106156761B | Image table detection and recognition method for mobile-terminal photographs
CN110781877B | Image recognition method, device and storage medium
CN102867313B | Visual saliency detection method fusing region color and HoG (histogram of oriented gradients) features
CN104463195A | Printed digit recognition method based on template matching
CN110597806A | System and method for wrong-question set generation and answer statistics based on marking recognition
CN108960382A | A color barcode and its color calibration method
CN114119585B | Transformer-based gastric cancer image recognition method with key-feature enhancement
CN103793702A | Pedestrian re-identification method based on coordinated scale learning
CN112634125B | Automatic face replacement method based on an offline face database
CN107527054B | Automatic foreground extraction method based on multi-view fusion
CN105426825A | Method for drawing power-grid geographical wiring diagrams based on aerial image recognition
CN108090485A | Image foreground extraction method based on multi-view fusion
CN107066972A | Natural-scene text detection method based on multi-channel extremal regions
CN108052936B | Method and system for automatic tilt correction of braille images
CN109034154A | Extraction and recognition method for the tax ID in invoice seals
CN111292283B | Oracle bone fragment conjugation method based on time-series similarity calculation
CN101458767B | Recognition method for handwritten digits on test papers
CN110222217A | Shoe print image retrieval method based on sectional weighting
CN110991434B | Self-service terminal certificate recognition method and device
CN111612011A | Clothing color extraction method based on human semantic segmentation
CN111951152B | Oracle bone conjugation method considering both original-edge continuity and broken-edge fracture-interface matching degree
CN112991185A | Automatic conjugation method for Dunhuang manuscript fragment images
CN112837334B | Automatic conjugation method for Han bamboo-slip images
CN115527102A | Fish species identification method and system based on contour key points and an attention mechanism

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
