Detailed Description
The solution according to the application will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the application are shown by way of illustration only and not by way of limitation; on the basis of these embodiments, those skilled in the art may obtain other solutions without exercising any inventive effort.
Referring to fig. 1, during integrated circuit fabrication, a photoresist is coated on the wafer surface and then exposed through a photomask, followed by a post-exposure bake. For positive-tone chemically amplified photoresists, the bake triggers a deprotection reaction that makes the photoresist in the exposed areas more soluble in the developer, so that the exposed photoresist can be removed during subsequent development to produce the desired photoresist pattern. Post-development inspection is then performed. The inspection during the process includes, for example, electron microscopy or optical measurement of the overlay difference between the patterns of the previous and current layers to determine whether they meet the specification. If the specification is met, a subsequent process is performed to transfer the desired pattern onto the wafer.
Referring to fig. 1, efficient and accurate inspection is a prerequisite for the smooth progress of semiconductor mass production, and inspection plays a vital role in monitoring and preventing deviations in processes including photolithography, polishing, etching, and the like. The overlay-difference measurement described in the context of the present application provides a solution for large-scale integrated circuit production and the related processes.
Referring to fig. 1, one exemplary type of overlay object is illustrated, namely a box-in-box mark (Box in Box). However, the types of overlay marks used in actual semiconductor production are diverse; other common overlay marks include, for example, bar-in-bar (Bar in Bar) and frame-in-frame (Frame in Frame) marks, which are not enumerated here. A box-in-box mark includes an outer box mark (outside box) and an inner box mark (inside box) located at different levels. The following text uses the measurement of the gray value variation of a wafer overlay object with a box-in-box type structure as a representative of overlay objects, and describes a method, a device or a system for measuring the gray value variation of a wafer overlay object.
Referring to fig. 1, the advantage of overlay objects such as bar-in-bar (Bar in Bar) and frame-in-frame (Frame in Frame) marks is that the lower-layer mark can independently define its own center based on the front, back, left and right sub-marks of the overlay object, and likewise the upper-layer mark can independently define its own center based on its front, back, left and right sub-marks. The center positions of the two layers can then be compared directly to determine whether the overlay of the two layers meets the alignment requirement. Terms such as bar-in-bar and frame-in-frame are relatively common in the field of integrated circuits and are not described separately herein.
Referring to fig. 1, in bar-in-bar (Bar in Bar) and frame-in-frame (Frame in Frame) marks, the two different levels (previous and current) are each provided with front, back, left and right sub-marks, whereas the box-in-box (Box in Box) mark has no similar arrangement. The conventional overlay measurement methods for bar-in-bar and frame-in-frame marks are therefore unsuitable for a box-in-box mark in many cases: the two sides of each individual sub-mark among the front, back, left and right sub-marks of a bar-in-bar or frame-in-frame mark have a nearly symmetrical pixel value distribution, but the box-in-box mark has no front, back, left and right sub-marks and thus no suitable symmetrical pixel value distribution from which to calculate the center position.
Referring to fig. 1, the prior art already correlates the pixel gray levels of an overlay object with the overlay error to obtain an overlay value. For example, patent application CN110865518A claims a wafer overlay alignment method that compares pattern profile data with layout data and includes the steps of: scanning the pattern on the wafer surface, calculating the gray values of the pattern, fitting the gray values of the pattern to the edges of the layout data, obtaining fitting coefficients through an iterative algorithm on the transformation coefficients, performing a gray-related transformation according to the fitting coefficients, and finally obtaining a gray image, namely the extracted vector profile pattern. Further, the method, electronic device and computer-readable storage medium of patent application CN112015056A can correct the overlay bias value; and various documents, such as the method for detecting the overlay deviation between a contact hole and a polysilicon pattern in patent application CN112635344A, relate to evaluating the overlay error using gray levels. How to evaluate the gray values of an overlay object with high precision and high reliability, in a way that is suitable for gray-level measurement in the special field of integrated circuits, remains a difficult problem.
Referring to fig. 1, in an alternative embodiment, a wafer 10 includes overlay objects such as bar marks 12 and 14. In the example, the bar mark 12 is, for example, a longitudinally extending elongated structure, and the bar mark 14 is, for example, a laterally extending elongated structure. Alternatively, the opposite may be assumed in the example: the bar mark 12 is, for example, an elongated structure extending in the transverse direction, and the bar mark 14 is, for example, an elongated structure extending in the longitudinal direction. When the overlay object is rotated by ninety degrees, the original transverse bar mark becomes a longitudinal bar mark and the original longitudinal bar mark becomes a transverse bar mark.
Referring to fig. 1, regarding the overlay: when a conventional lithography machine operates, all the fields on a wafer are exposed one by one, and then the wafer is replaced until all the wafers are exposed. It should be noted that after the wafer has completed the first-layer process, the mask is replaced and a second-layer pattern is then exposed on the wafer, that is, a repeated exposure is performed. The pattern of the second-layer mask must be accurately nested with the pattern of the first-layer mask; this is called overlay.
Referring to fig. 1, whether a first layer (e.g., a previous layer or a subsequent layer) and a second layer (e.g., a subsequent layer or a previous layer) are properly registered can be determined by relying on overlay marks (e.g., marks 12/14, etc.) to detect the overlay difference.
Referring to fig. 1, regarding the overlay mark: patterns on a wafer that are specifically used to measure the overlay difference are referred to as overlay marks. When the mask is designed, these marks are placed in designated areas, typically in the dicing streets of the wafer (a wafer is ultimately diced into thousands or more dies; the dicing streets are reserved for die dicing and are typically only tens of microns wide).
Referring to fig. 1, the International Technology Roadmap for Semiconductors (ITRS) imposes overlay-difference requirements on the lithography process at each technology node; for example, the 3 sigma for DRAM has gone from around 7.1 nm in the early stage to around 2 nm at the current stage, that for logic devices from around 7.6 nm to around 1.9 nm, and that for Flash devices from around 7.2 nm to around 2.6 nm. The overlay variation OVLE (3σ) is typically reduced through the cooperation of the lithography machine alignment system, the overlay-difference measurement apparatus, and the alignment correction software. To ensure that the upper- and lower-layer circuits of a chip design can be reliably connected, the alignment deviation between a given endpoint in the current layer and the corresponding endpoint in the reference layer must be less than one third of the minimum pitch of the pattern. Following this trend, as technology nodes advance, the allowable alignment deviation of the critical lithography layer, i.e. the overlay difference, is scaled down year by year; how to quickly and accurately give the overlay difference at smaller technology nodes is therefore one of the technical problems to be solved.
Referring to fig. 1, there are many reasons why the exposure pattern may be misaligned with the reference pattern (i.e., the center position of the mark 12/14 of the current layer is not aligned with the center position of the mark 12/14 of the reference layer, so the overlay difference is not zero). Mask distortion or scaling anomalies, wafer distortion, distortion of the projection lens system of the lithographic apparatus, non-uniformity of wafer stage movement, and the like all introduce misalignment, so measuring overlay variations is extremely important in the semiconductor or integrated circuit industry.
Referring to fig. 1, regarding the marks: in the operation of the lithography machine, all the fields on the wafer are exposed one by one, and then the wafer is replaced until all the wafers are exposed; this may be the exposure of a first-layer pattern on the wafer, i.e. lithography that transfers the first-layer circuit pattern onto the wafer. It should be noted that after the first layer of the wafer is processed, the mask is replaced and a second-layer pattern is then exposed on the wafer, i.e. a repeated exposure is performed. The first layer includes the designed integrated circuit or electronic components or wiring, etc., and so does the second layer. To ensure that the upper- and lower-layer circuits of the chip design can be reliably connected (e.g., the electrical connection of the first-layer circuits and the second-layer circuits), the misalignment between a given endpoint in the current layer (or the subsequent layer, or the second layer) and the corresponding endpoint in the reference layer (or the previous layer, or the first layer) is, for example, less than one third of the minimum pitch of the pattern. Following the development trend of the semiconductor industry, as technology nodes advance, the allowable alignment deviation of the critical lithography layer, i.e. the overlay difference, is scaled down year by year; how to quickly and accurately give the overlay difference at smaller technology nodes is one of the technical problems to be solved.
Referring to FIG. 2, in an alternative embodiment, image area 12A has a height H1 and a width W1. This image area may be an image area 12A obtained by photographing a partial area or position of the bar mark 12. The image area shows pixels on the abscissa, such as pixels X0, X1, … XN, and each abscissa corresponds to a total of M ordinates. Gray scale compression (as indicated by the arrow) is performed for each column of pixels.
Referring to fig. 2, in an alternative embodiment, the image area 12A has a height H1 and a width W1; if different heights H1 and widths W1 are selected, i.e. the values of H1 and W1 are adjusted, the natural number N of XN differs accordingly and the gray distribution and gray characteristics of the image area 12A change. Reasonable values of H1 and W1 may be selected and adjusted as long as the purpose of measuring the gray value variation of the overlay object on the wafer 10 is satisfied. In this example, H1 > W1.
Referring to fig. 3, in an alternative embodiment, the first and second types of coordinates are the abscissa and the ordinate, respectively. In the example, all the ordinates corresponding to the abscissa X0 are Y0, Y1, … YM. The coordinates of the M pixels corresponding to the abscissa X0 are (X0, Y0), (X0, Y1), (X0, Y2) … (X0, YM), respectively, M being a natural number.
Referring to fig. 3, in an alternative embodiment, the coordinates of the M pixels corresponding to the abscissa X1 are, similarly, (X1, Y0), (X1, Y1), (X1, Y2) … (X1, YM), respectively.
Referring to fig. 3, in an alternative embodiment, the coordinates of the M pixels corresponding to the abscissa X2 are, similarly, (X2, Y0), (X2, Y1), (X2, Y2) … (X2, YM), respectively.
Referring to fig. 3, in an alternative embodiment, the coordinates of the M pixels corresponding to the abscissa XN are, similarly, (XN, Y0), (XN, Y1), (XN, Y2) … (XN, YM), respectively.
Referring to fig. 3, in an alternative embodiment, the gray values of the respective pixels of an image area 12A of an overlay object, such as 12 or 14, are recorded, and, for any first-type coordinate of the image area 12A, a set of the gray values of all the second-type coordinates corresponding to that first-type coordinate is extracted. Such a set may be denoted DS (Data Set).
Referring to fig. 3, in an alternative embodiment, the gray values of the individual pixels of the image area 12A of the overlay object bar mark 12 are recorded. At a first-type coordinate of the image area 12A such as X0, a first set (a vertical set) of the gray values of all the second-type coordinates (Y0 to YM) corresponding to the first-type coordinate X0 is extracted.
Referring to fig. 3, the first set contains the pixel points (X0, Y0), (X0, Y1), … (X0, YM) and their respective gray values; this series of gray values of M pixel points constitutes the first set.
Referring to fig. 3, the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X0 (i.e., the first set) are compressed, and an equivalent gray Gray_X0 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate X0 (i.e., the first set).
Referring to fig. 3, in an alternative embodiment, the gray values of the individual pixels of the image area 12A of the overlay object bar mark 12 are recorded. At a first-type coordinate of the image area 12A such as X1, a second set (also a vertical set) of the gray values of all the second-type coordinates (Y0 to YM) corresponding to the first-type coordinate X1 is extracted.
Referring to fig. 3, the second set contains the pixel points (X1, Y0), (X1, Y1), … (X1, YM) and their respective gray values; this series of gray values of M pixel points constitutes the second set.
Referring to fig. 3, the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X1 (i.e., the second set) are compressed, and an equivalent gray Gray_X1 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate X1 (i.e., the second set).
Referring to fig. 3, in an alternative embodiment, the gray values of the individual pixels of the image area 12A of the overlay object bar mark 12 are recorded. At a first-type coordinate of the image area 12A such as X2, a third set (also a vertical set) of the gray values of all the second-type coordinates (Y0 to YM) corresponding to the first-type coordinate X2 is extracted.
Referring to fig. 3, the third set contains the pixel points (X2, Y0), (X2, Y1), … (X2, YM) and their respective gray values; this series of gray values of M pixel points constitutes the third set.
Referring to fig. 3, the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X2 (i.e., the third set) are compressed, and an equivalent gray Gray_X2 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate X2 (i.e., the third set).
Referring to fig. 3, in an alternative embodiment, the gray values of the individual pixels of the image area 12A of the overlay object bar mark 12 are recorded. At a first-type coordinate of the image area 12A such as XN, an (N+1)th set (also a vertical set) of the gray values of all the second-type coordinates (Y0 to YM) corresponding to the first-type coordinate XN is extracted.
Referring to fig. 3, the (N+1)th set contains the pixel points (XN, Y0), (XN, Y1), … (XN, YM) and their respective gray values; this series of gray values of M pixel points constitutes the (N+1)th set.
Referring to fig. 3, the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate XN (i.e., the (N+1)th set) are compressed, and an equivalent gray Gray_XN is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate XN (i.e., the (N+1)th set).
Referring to fig. 3, Gray_X0 corresponding to X0, Gray_X1 corresponding to X1, Gray_X2 corresponding to X2, Gray_X3 corresponding to X3, … and Gray_XN corresponding to XN are combined into an array. One of the uses of the array is to determine the gray value change of the overlay object in, for example, the image area 12A, and thereby determine the dynamic change of the brightness of the image area 12A along the first-type coordinate direction, i.e. to realize the measurement of the gray value change of the overlay object.
Referring to fig. 3, Gray_X0 to Gray_XN constitute an array, denoted AR (Array).
Referring to fig. 3, in an alternative embodiment, the gray value of any gray level appearing in the set is multiplied by the number of times that gray level appears in the set, and the result is taken as the gray level product corresponding to that gray level; all the gray level products generated in the set are added to obtain a sum, and the sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate in question to obtain the equivalent gray corresponding to that first-type coordinate (taking the columns of pixel points as an example).
Referring to fig. 3, regarding gray scale compression, taking the (N+1)th set as an example: the gray value of a gray level appearing in the (N+1)th set, for example 250 (typically between 0 and 255), is multiplied by the number of times that gray level 250 appears in the set (assume the number of occurrences is K), giving the gray level product 250×K, where K is a positive integer.
Referring to fig. 3, regarding gray scale compression, taking the (N+1)th set as an example: the gray value of another gray level appearing in the (N+1)th set, for example 199 (typically between 0 and 255), is multiplied by the number of times that gray level 199 appears in the set (assume the number of occurrences is Q), giving the gray level product 199×Q corresponding to that gray level, where Q is a positive integer.
Referring to fig. 3, regarding gray scale compression, taking the (N+1)th set as an example: the gray value of another gray level appearing in the (N+1)th set, for example 58 (typically between 0 and 255), is multiplied by the number of times that gray level 58 appears in the set (assume the number of occurrences is P), giving the gray level product 58×P corresponding to that gray level, where P is a positive integer.
Referring to fig. 3, regarding gray scale compression, taking the (N+1)th set as an example: the gray value of another gray level appearing in the (N+1)th set, for example 17 (typically between 0 and 255), is multiplied by the number of times that gray level 17 appears in the set (assume the number of occurrences is V), giving the gray level product 17×V corresponding to that gray level, where V is a positive integer.
Referring to fig. 3, it is assumed that no gray values other than the gray levels 17, 58, 199 and 250 appear in the (N+1)th set. It should be noted that this particular case is merely an assumption made for convenience of explanation and generally does not exist in practice.
Referring to fig. 3, taking the (N+1)th set as an example: all the gray level products generated in the (N+1)th set are added to obtain a sum (250×K + 199×Q + 58×P + 17×V), and the sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate XN, for example M, as the equivalent gray corresponding to the first-type coordinate XN. The mathematical expression is (250×K + 199×Q + 58×P + 17×V) divided by M, which is the equivalent gray.
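As a concrete illustration of the column-wise compression just described, a minimal Python sketch follows; it assumes the image area is available as a NumPy array of 8-bit gray values, and the function name, the toy array region_12a and the variable names are illustrative assumptions rather than anything taken from the source.

```python
import numpy as np

def equivalent_gray_per_column(region: np.ndarray) -> np.ndarray:
    # For each column (one first-type coordinate, e.g. Xn), multiply every
    # gray level by its number of occurrences, add the products, and divide
    # by the number of pixels in the column, mirroring the worked example
    # (250*K + 199*Q + 58*P + 17*V) / M for the (N+1)th set.
    h, w = region.shape
    result = np.empty(w, dtype=float)
    for col in range(w):
        levels, counts = np.unique(region[:, col], return_counts=True)
        result[col] = float(np.sum(levels.astype(float) * counts)) / h
    return result

# Toy stand-in for image area 12A (values are illustrative only).
region_12a = np.array([[250, 199],
                       [250,  58],
                       [199,  17],
                       [ 58,  17]], dtype=np.uint8)
print(equivalent_gray_per_column(region_12a))  # one equivalent gray per column
```

Because each gray level is weighted by its number of occurrences and the total is divided by the number of pixels in the column, this equivalent gray is numerically the arithmetic mean of the column's gray values.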
Referring to fig. 4, in an alternative embodiment, image area 14A has a height H2 and a width W2. This image area may be an image area 14A obtained by photographing a partial area or position of the bar mark 14. The image area shows a plurality of pixels on the ordinate, such as pixels Y0, Y1, … YT, and each ordinate corresponds to a total of R abscissas. Gray scale compression (as indicated by the arrow) is performed for each row of pixels.
Referring to fig. 4, in an alternative embodiment, the image area 14A has a height H2 and a width W2; if different heights H2 and widths W2 are selected, i.e. the values of H2 and W2 are adjusted, the natural number T of YT differs accordingly and the gray distribution and gray characteristics of the image area 14A change. Reasonable values of H2 and W2 may be selected and adjusted as long as the purpose of measuring the gray value variation of the overlay object on the wafer 10 is satisfied. In this example, H2 < W2.
Referring to fig. 4, in an alternative embodiment, the first and second types of coordinates are the ordinate and the abscissa, respectively. In the example, all the abscissas corresponding to the ordinate Y0 are X0, X1, … XR. The coordinates of the R pixels corresponding to the ordinate Y0 are (X0, Y0), (X1, Y0), (X2, Y0) … (XR, Y0), respectively, R being a natural number.
Referring to fig. 4, in an alternative embodiment, the coordinates of the R pixels corresponding to the ordinate Y1 are, similarly, (X0, Y1), (X1, Y1), (X2, Y1) … (XR, Y1), respectively.
Referring to fig. 4, in an alternative embodiment, the coordinates of the R pixels corresponding to the ordinate Y2 are, similarly, (X0, Y2), (X1, Y2), (X2, Y2) … (XR, Y2), respectively.
Referring to fig. 4, in an alternative embodiment, the coordinates of the R pixels corresponding to the ordinate YT are, similarly, (X0, YT), (X1, YT), (X2, YT) … (XR, YT), respectively.
Referring to fig. 4, in an alternative embodiment, the gray values of the respective pixels of an image area 14A of an overlay object, such as 14 or 12, are recorded, and, for any first-type coordinate of the image area 14A, a set of the gray values of all the second-type coordinates corresponding to that first-type coordinate is extracted.
Referring to fig. 4, in an alternative embodiment, the gray values of the individual pixels of the image area 14A of the overlay object bar mark 14 are recorded. At a first-type coordinate of the image area 14A such as Y0, a first set (a horizontal set) of the gray values of all the second-type coordinates (X0 to XR) corresponding to the first-type coordinate Y0 is extracted.
Referring to fig. 4, the first set contains the pixel points (X0, Y0), (X1, Y0), … (XR, Y0) and their respective gray values; this series of gray values of R pixel points constitutes the first set.
Referring to fig. 4, the gray values of all the second-type coordinates X0 to XR corresponding to the first-type coordinate Y0 (i.e., the first set) are compressed, and an equivalent gray Gray_Y0 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate Y0 (i.e., the first set).
Referring to fig. 4, in an alternative embodiment, the gray values of the individual pixels of the image area 14A of the overlay object bar mark 14 are recorded. At a first-type coordinate of the image area 14A such as Y1, a second set (also a horizontal set) of the gray values of all the second-type coordinates (X0 to XR) corresponding to the first-type coordinate Y1 is extracted.
Referring to fig. 4, the second set contains the pixel points (X0, Y1), (X1, Y1), … (XR, Y1) and their respective gray values; this series of gray values of R pixel points constitutes the second set.
Referring to fig. 4, the gray values of all the second-type coordinates X0 to XR corresponding to the first-type coordinate Y1 (i.e., the second set) are compressed, and an equivalent gray Gray_Y1 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate Y1 (i.e., the second set).
Referring to fig. 4, in an alternative embodiment, the gray values of the individual pixels of the image area 14A of the overlay object bar mark 14 are recorded. At a first-type coordinate of the image area 14A such as Y2, a third set (also a horizontal set) of the gray values of all the second-type coordinates (X0 to XR) corresponding to the first-type coordinate Y2 is extracted.
Referring to fig. 4, the third set contains the pixel points (X0, Y2), (X1, Y2), … (XR, Y2) and their respective gray values; this series of gray values of R pixel points constitutes the third set.
Referring to fig. 4, the gray values of all the second-type coordinates X0 to XR corresponding to the first-type coordinate Y2 (i.e., the third set) are compressed, and an equivalent gray Gray_Y2 is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate Y2 (i.e., the third set).
Referring to fig. 4, in an alternative embodiment, the gray values of the individual pixels of the image area 14A of the overlay object bar mark 14 are recorded. At a first-type coordinate of the image area 14A such as YT, a (T+1)th set (also a horizontal set) of the gray values of all the second-type coordinates (X0 to XR) corresponding to the first-type coordinate YT is extracted.
Referring to fig. 4, the (T+1)th set contains the pixel points (X0, YT), (X1, YT), … (XR, YT) and their respective gray values; this series of gray values of R pixel points constitutes the (T+1)th set.
Referring to fig. 4, the gray values of all the second-type coordinates X0 to XR corresponding to the first-type coordinate YT (i.e., the (T+1)th set) are compressed, and an equivalent gray Gray_YT is used to represent the compressed result of the gray values of all the second-type coordinates corresponding to the first-type coordinate YT (i.e., the (T+1)th set).
Referring to fig. 4, Gray_Y0 corresponding to Y0, Gray_Y1 corresponding to Y1, Gray_Y2 corresponding to Y2, Gray_Y3 corresponding to Y3, … and Gray_YT corresponding to YT are combined into an array. One of the uses of the array is to determine the gray value change of the overlay object in, for example, the image area 14A, and thereby determine the dynamic change of the brightness of the image area 14A along the first-type coordinate direction, i.e. to realize the measurement of the gray value change of the overlay object.
Referring to fig. 4, in an alternative embodiment, the gray value of any gray level appearing in the set is multiplied by the number of times that gray level appears in the set, and the result is taken as the gray level product corresponding to that gray level; all the gray level products generated in the set are added to obtain a sum, and the sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate in question to obtain the equivalent gray corresponding to that first-type coordinate (taking the rows of pixel points as an example).
Referring to fig. 4, regarding gray scale compression, taking the (T+1)th set as an example: the gray value of a gray level appearing in the (T+1)th set, for example 236 (typically between 0 and 255), is multiplied by the number of times that gray level 236 appears in the set (assume the number of occurrences is K), giving the gray level product 236×K corresponding to that gray level, where K is a positive integer.
Referring to fig. 4, regarding gray scale compression, taking the (T+1)th set as an example: the gray value of another gray level appearing in the (T+1)th set, for example 125 (typically between 0 and 255), is multiplied by the number of times that gray level 125 appears in the set (assume the number of occurrences is Q), giving the gray level product 125×Q corresponding to that gray level, where Q is a positive integer.
Referring to fig. 4, regarding gray scale compression, taking the (T+1)th set as an example: the gray value of another gray level appearing in the (T+1)th set, for example 98 (typically between 0 and 255), is multiplied by the number of times that gray level 98 appears in the set (assume the number of occurrences is P), giving the gray level product 98×P corresponding to that gray level, where P is a positive integer.
Referring to fig. 4, regarding gray scale compression, taking the (T+1)th set as an example: the gray value of another gray level appearing in the (T+1)th set, for example 76 (typically between 0 and 255), is multiplied by the number of times that gray level 76 appears in the set (assume the number of occurrences is V), giving the gray level product 76×V corresponding to that gray level, where V is a positive integer.
Referring to fig. 4, it is assumed that no gray values other than the gray levels 76, 98, 125 and 236 appear in the (T+1)th set. It should be noted that this particular case is merely an assumption made for convenience of explanation and generally does not exist in practice.
Referring to fig. 4, taking the (T+1)th set as an example: all the gray level products generated in the (T+1)th set are added to obtain a sum (236×K + 125×Q + 98×P + 76×V), and the sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate YT, for example R, as the equivalent gray corresponding to the first-type coordinate YT. The mathematical expression is (236×K + 125×Q + 98×P + 76×V) divided by R, which is the equivalent gray.
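The row-wise compression for an image area such as 14A is the same operation applied along the other axis; the sketch below, under the same assumptions as the earlier column-wise sketch (NumPy array of 8-bit gray values, illustrative names), is one possible way to express it.

```python
import numpy as np

def equivalent_gray_per_row(region: np.ndarray) -> np.ndarray:
    # Row-wise counterpart for an image area such as 14A, where the first-type
    # coordinate is the ordinate: each row is compressed into
    # sum(gray level * occurrences) / number of pixels in the row,
    # e.g. (236*K + 125*Q + 98*P + 76*V) / R for the (T+1)th set.
    h, w = region.shape
    out = np.empty(h, dtype=float)
    for row in range(h):
        levels, counts = np.unique(region[row, :], return_counts=True)
        out[row] = float(np.sum(levels.astype(float) * counts)) / w
    return out
```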
Referring to fig. 2, the image area 12A includes peripheral local positions of the overlay object such as the bar mark 12 (e.g., the peripheral local positions on the left and right sides of the bar mark 12 in fig. 1). In this embodiment, therefore, the array (Gray_X0, Gray_X1, …, Gray_XN) simultaneously incorporates the equivalent grays of the overlay object and of its peripheral local positions, and the brightness change of the image area 12A, centered on the overlay object such as the bar mark 12, exhibits a mirror-symmetrical trend. Typically, the light-dark change, e.g. the change curve, on the left side of the bar mark 12 and the light-dark change, e.g. the change curve, on the right side of the bar mark 12 are mirror-symmetrical, e.g. with the bar mark 12 as the center of symmetry.
Referring to fig. 4, the image area 14A includes peripheral local positions of the overlay object such as the bar mark 14 (e.g., the peripheral local positions on the upper and lower sides of the bar mark 14 in fig. 1). In this embodiment, therefore, the array (Gray_Y0, Gray_Y1, …, Gray_YT) simultaneously incorporates the equivalent grays of the overlay object and of its peripheral local positions, and the brightness change of the image area 14A, centered on the overlay object such as the bar mark 14, exhibits a mirror-symmetrical trend. Typically, with the bar mark 14 as the center of symmetry, the light-dark change, e.g. the change curve, above the bar mark 14 and the light-dark change, e.g. the change curve, below the bar mark 14 are mirror-symmetrical.
Referring to fig. 5, in an alternative embodiment, image area 12A is not limited to a particular area; it may slide freely, i.e. a local area or position of the bar mark 12 may be freely selected. In image processing it is generally necessary to locate the specific coordinate values of X0 to XN and Y0 to YM. If the image area 12A is freely selected, a reference coordinate of the image area 12A needs to be located. For example, the coordinates x_start1 and y_start1 of the upper-left position serve as the reference coordinate of the selected area, and the actual coordinates of each of X0 to XN and Y0 to YM are then given with the reference coordinate as the reference value. The pixel point with abscissa x_start1 and ordinate y_start1 is, for example, the leftmost and uppermost pixel point of the image area 12A selected for the bar mark 12. Of course, the coordinate of the leftmost and bottommost pixel of the image area 12A may be selected as the reference coordinate, the coordinate of the rightmost and topmost pixel of the image area 12A may be selected as the reference coordinate, or the coordinate of the rightmost and bottommost pixel of the image area 12A may be selected as the reference coordinate.
Referring to fig. 6, in an alternative embodiment, image area 14A is not limited to a particular area; it may slide freely, i.e. a local area or position of the bar mark 14 may be freely selected. In image processing it is often necessary to locate the specific coordinate values of X0 to XR and Y0 to YT. If the image area 14A is freely selected, a reference coordinate of the image area 14A needs to be located. For example, the coordinates x_start2 and y_start2 of the upper-left position serve as the reference coordinate of the selected area, and the actual coordinates of each of X0 to XR and Y0 to YT are then given with the reference coordinate as the reference value. The pixel point with abscissa x_start2 and ordinate y_start2 is, for example, the leftmost and uppermost pixel point of the image area 14A selected for the bar mark 14. Of course, the coordinate of the leftmost and bottommost pixel of the image area 14A may be selected as the reference coordinate, the coordinate of the rightmost and topmost pixel of the image area 14A may be selected as the reference coordinate, or the coordinate of the rightmost and bottommost pixel of the image area 14A may be selected as the reference coordinate.
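One possible way to express the reference-coordinate handling described in connection with fig. 5 and fig. 6 is sketched below; the function name select_region and the choice of the top-left pixel as the reference are illustrative assumptions, since any of the four corners may serve as the reference coordinate.

```python
import numpy as np

def select_region(full_image: np.ndarray, x_start: int, y_start: int,
                  width: int, height: int):
    # (x_start, y_start) plays the role of the reference coordinate, e.g. the
    # top-left pixel of a freely selected image area such as 12A or 14A; the
    # actual coordinate of the local pixel (Xi, Yj) is then
    # (x_start + i, y_start + j).
    region = full_image[y_start:y_start + height, x_start:x_start + width]
    abscissas = x_start + np.arange(width)    # actual values of X0 ... XN (or X0 ... XR)
    ordinates = y_start + np.arange(height)   # actual values of Y0 ... YM (or Y0 ... YT)
    return region, abscissas, ordinates
```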
Referring to fig. 7, in a modern semiconductor or integrated circuit manufacturing process, a lithography layer usually needs to be aligned to the two preceding layers. The conventional standard overlay measurement flow has poor measurement accuracy because the measurement positions differ in multilayer alignment; under the lens, a mark often behaves differently at the positions of different layers, and this difference affects the overlay result. In addition, conventional overlay measurement masks are designed in a one-to-one overlapping manner, so when multiple layers are aligned, several masks are needed for overlay measurement, occupying more dicing-street space; with product diversification, the increasing number of layers of semiconductor products and the limited dicing-street space, more space can only be saved by optimizing the mask associated with the wafer 10, yet directly modifying the reticle tends to affect the electrical or routing characteristics of the intended core circuitry or components on the wafer 10. The biggest problem of traditional overlay measurement is that, when process anomalies such as wafer deflection, wafer expansion or wafer shrinkage are encountered, the overlay marks of the previous and current layers may still coincide, and the overlay result cannot be combined with the process anomaly.
Referring to FIG. 7, the overlay difference between the measured current-layer mark, e.g. 12/14, and the reference-layer mark, e.g. 12/14 (the overlay object), can be measured by conventional measurement equipment, as taught by Wenya et al in the literature "very large scale integrated advanced lithography theory and application". Conventional overlay metrology equipment typically relies on image analysis and specific algorithmic processing of a dedicated overlay object to evaluate the overlay error, which is relatively time-consuming and laborious, and the result is not necessarily accurate. The most troublesome problem is that the detected overlay difference may exhibit a near-zero error while the overlay object in fact suffers from certain defects. Suppose, for example, that the marks in the reference layer (or the previous or first layer), e.g. 12/14, are standard, while the marks in the subsequent layer, e.g. 12/14, have a process problem such as deflection (Rotation) relative to the wafer; note that the center of the mark in the subsequent layer (or the current layer, or the second layer), e.g. 12/14, is unchanged, so the center of the previous-layer mark is still nearly perfectly aligned with the center of the subsequent-layer mark, yet fig. 7 shows that, e.g., 12/14 can be rotated about its own center. In this case the conventional overlay measurement apparatus may consider the overlay object normal and ignore the process anomaly. It is therefore necessary to provide a new measurement scheme as an alternative to conventional overlay measurement, and the scheme should satisfy the following: the measurement process is simplified, the result can be given quickly and accurately, and it can be judged whether the production process that matters to the overlay difference is abnormal and interferes with the measurement result.
Referring to fig. 7, the foregoing describes the process problem related to wafer deflection: the whole wafer shows an overlay-difference deflection result, and the current-layer mark (e.g. 12/14) is deflected as a whole about the center of the wafer relative to the previous-layer mark (e.g. 12/14), yet at this point the center of one mark such as 12/14 and the center of the other mark such as 12/14 are still allowed to coincide. In this case the conventional overlay measurement apparatus may consider the overlay object normal and ignore the process anomaly, although an overlay difference does exist. In the frame-in-frame mode, for example, the outer and inner marks of fig. 7 are rotated or twisted relative to each other.
Referring to fig. 7, consider also the additional process problem of wafer expansion or contraction: the whole wafer shows an expansion or contraction in the overlay-difference measurement result, and the current layer (mark such as 12/14) is expanded or contracted about the wafer center relative to the previous layer (mark such as 12/14) by an amount that still allows the center of the previous-layer mark and the center of the current-layer mark to coincide, yet the overlay difference truly exists. In the box-in-box mode, for example, the outer box mark or the inner box mark expands or contracts.
Referring to fig. 7, for wafer shift (Wafer Shift), which is an example of overlay offset, the current layer is offset to some extent in the X/Y direction with respect to the previous layer, and the overlay difference is the same in the X/Y direction of each exposure unit. For wafer expansion or contraction (Wafer Magnification), the entire wafer shows an overlay expansion or contraction result, the current layer being expanded or contracted as a whole relative to the previous layer when viewed over the whole wafer. In addition, for wafer deflection related anomalies (Wafer Rotation), the entire wafer shows an overlay-difference deflection result, the current layer having an overall deflection relative to the previous layer about the wafer center. The application may, for example, be configured to adaptively infer the overlay difference for these various possibilities by metrology, thereby avoiding the problem that conventional schemes cannot identify process problems such as wafer expansion or contraction or wafer deflection, or the overlay differences caused by them. Note that either only one of the current-layer marks and the previous-layer marks may have the aforementioned process anomaly, or both may have it, with the magnitudes or degrees of occurrence possibly being inconsistent.
Referring to fig. 7, further, whether the measured overlay difference between the current layer, e.g. the inner box, and the reference layer, e.g. the outer box, meets the prescribed specification should be judged together with whether the integrated circuit process associated with the overlay is normal. As described above, when the current layer's mark, e.g. 12/14, is deflected relative to the reference layer's mark, e.g. 12/14, it is not meaningful in this case to ask whether the overlay object meets the predetermined specification. Likewise, when the current layer has been expanded or scaled relative to the reference layer, it is not meaningful to ask whether the overlay object meets the predetermined specification, because the integrated circuit process associated with the overlay has already generated an abnormal, hidden process deviation.
Referring to fig. 7, if the anomaly of the overlay difference is related to the process, the simple measurement of the conventional overlay measurement apparatus obviously does not address this correlation at all; it cannot satisfy the requirement of judging whether the integrated circuit process related to the overlay is normal, and in that case whether the overlay object meets the specification is of little significance.
Referring to fig. 7, in an alternative embodiment, the equivalent grays corresponding to all the first-type coordinates of the image area 12A are combined into an array, and this array 1 (Gray_X0, Gray_X1, …, Gray_XN) is used to determine the gray value change of the overlay object and the light-dark dynamic change of the image area 12A along the first-type coordinate direction: the brightness change of the image area is required to exhibit a mirror-symmetrical trend centered on the overlay object such as the bar mark 12. If this condition is not satisfied, the wafer 10 is considered abnormal at the current process stage of preparing the overlay object; conversely, if the condition is satisfied, the wafer 10 is considered not abnormal at the current process stage of preparing the overlay object.
Referring to fig. 7, in an alternative embodiment, the equivalent grays corresponding to all the first-type coordinates of the image area 14A are combined into an array, and this array 2 (Gray_Y0, Gray_Y1, …, Gray_YT) is used to determine the gray value change of the overlay object and the light-dark dynamic change of the image area 14A along the first-type coordinate direction: the brightness change of the image area is required to exhibit a mirror-symmetrical trend centered on the overlay object such as the bar mark 14. If this condition is not satisfied, the wafer 10 is considered abnormal at the current process stage of preparing the overlay object; conversely, if the condition is satisfied, the wafer 10 is considered not abnormal at the current process stage of preparing the overlay object.
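A hedged sketch of one way such a symmetry check could be carried out on the array of equivalent grays is given below; the tolerance rel_tol and the element-by-element comparison against the reversed array are assumptions of this sketch, not a test prescribed by the application.

```python
import numpy as np

def brightness_is_mirror_symmetric(equiv_grays: np.ndarray, rel_tol: float = 0.05) -> bool:
    # Compare the array of equivalent grays with its own reversal; if every
    # element differs from its mirror counterpart by no more than rel_tol of
    # the largest value, the brightness trend is treated as mirror-symmetric
    # about the centre of the image area (i.e. about the overlay object).
    scale = max(float(np.max(np.abs(equiv_grays))), 1e-12)
    deviation = float(np.max(np.abs(equiv_grays - equiv_grays[::-1])))
    return deviation <= rel_tol * scale

# Example: a trend that rises toward the overlay object and falls again symmetrically.
array_1 = np.array([60.0, 90.0, 180.0, 220.0, 180.0, 90.0, 60.0])
print(brightness_is_mirror_symmetric(array_1))                                     # True
print(brightness_is_mirror_symmetric(array_1 + np.array([0, 0, 0, 0, 40, 40, 40])))  # False
```

Under this sketch, a False result would correspond to treating the wafer 10 as abnormal at the current process stage of preparing the overlay object.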
Referring to fig. 5, in an alternative embodiment, the gray values of a column of pixels corresponding to any first-type coordinate (e.g., an abscissa) are compressed in the longitudinal direction (e.g., the direction indicated by the arrow); the gray values corresponding to the column of pixels are compressed to obtain the equivalent gray corresponding to that first-type coordinate, and the compressed single equivalent gray is used to represent the brightness of the column of pixels. A specific example: any first-type coordinate XN corresponds to a column of pixel points whose coordinates are (XN, Y0), (XN, Y1), … (XN, YM); the gray values of this column of pixel points are compressed in the longitudinal direction indicated by the arrow, and after the gray values corresponding to the column are compressed, the equivalent gray corresponding to that first-type coordinate, i.e. XN, is Gray_XN, and the compressed single equivalent gray is used to represent the brightness of the column of pixel points (XN, Y0), (XN, Y1), … (XN, YM).
Referring to fig. 6, in an alternative embodiment, the gray values of a row of pixel points corresponding to any first-type coordinate (e.g., an ordinate) are compressed in the transverse direction (e.g., the direction indicated by the arrow); the gray values corresponding to the row of pixels are compressed to obtain the equivalent gray corresponding to that first-type coordinate, and the compressed single equivalent gray is used to represent the brightness of the row of pixels. A specific example: the coordinates of the row of pixels corresponding to any first-type coordinate YT are (X0, YT), (X1, YT), … (XR, YT); the gray values of this row of pixels are compressed in the transverse direction, for example the arrow direction, and after the gray values corresponding to the row are compressed, the equivalent gray corresponding to that first-type coordinate, i.e. YT, is Gray_YT, and the brightness of the row of pixels (X0, YT), (X1, YT), … (XR, YT) can be represented by the compressed single equivalent gray.
Referring to fig. 8, in an alternative embodiment, the gray values of all the second-type coordinates corresponding to any first-type coordinate are compressed, and the compressed result is represented by an equivalent gray. The compression is performed as follows: the set (G_X0Y0, G_X0Y1, G_X0Y2, … G_X0YM) contains the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X0; the gray value of any gray level appearing in the set, for example G_X0YM, is multiplied by the number of times that gray level appears in the set, for example K, giving the gray level product G_X0YM×K corresponding to that gray level; all the gray level products generated in the set are added to obtain a sum, and the sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate, for example M, to give the equivalent gray Gray_X0 corresponding to that first-type coordinate. The squares of fig. 8 are assumed to be the series of compressed values corresponding to the first-type coordinates of the image area 12A; each square corresponds to a column of pixels (a column of dashed lines in fig. 8 represents a column of pixels, and each column of pixels has a series of gray values), each square represents the equivalent gray of the column of pixels corresponding to it, and the greater the equivalent gray, the greater the height of the square.
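The fig. 8 style representation, in which each square stands for the equivalent gray of one column, can be reproduced approximately with a simple bar plot; the following sketch assumes matplotlib is available, and the numerical values are invented purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative stand-in for the per-column equivalent grays of image area 12A;
# in practice the array would come from the compression described above.
equiv_grays = np.array([62.0, 64.5, 131.0, 221.0, 130.5, 64.0, 61.5])

plt.bar(np.arange(len(equiv_grays)), equiv_grays)   # one square/bar per column, as in fig. 8
plt.xlabel("first-type coordinate index (X0 ... XN)")
plt.ylabel("equivalent gray (Gray_Xn)")
plt.title("Brightness trend of image area 12A along the first-type coordinate")
plt.show()
```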
Referring to fig. 8, overlay objects such as the bar marks 12/14 serve as overlay structures, and the image processing of such structures typically involves, for example, feature measurement and overlay-difference determination of the wafer 10. In the prior art, the pattern data of the overlay object can be compared with the layout data, for example by the steps of: after scanning the pattern on the wafer surface, the gray values of the pattern are obtained by calculation, the pattern gray values and the layout data are fitted, fitting coefficients are obtained through an iterative algorithm on the transformation coefficients, a gray transformation is carried out according to the fitting coefficients, and the finally obtained gray image is the extracted contour map. This approach can be used to analyze and detect the accuracy of the upper and lower overlay pairs, allows timely and appropriate adjustment in the compensation and correction process, reduces process design time, and improves various errors of process design; this is one of the application directions of fig. 8. Although the prior art can analyze the gray levels of the overlay object and perform image contour analysis, its main disadvantage is that the gray levels of the overlay object cannot be correlated with whether the manufacturing process of the overlay object is abnormal. One obvious advantage of the implementation of fig. 8 is therefore that, in addition to analyzing and detecting the accuracy of the overlay pair on the wafer from the gray levels of the overlay object, it can also be analyzed from those gray levels whether the process of preparing the overlay object is abnormal (if the process is abnormal, measures are generally required, such as actively adjusting the preparation process of the overlay object, changing the related process parameters, trying a different structural type of overlay object, or applying a suitable image filter to the gray values of the overlay object). An important precondition for implementing fig. 8 is that the equivalent grays corresponding to all the first-type coordinates of the image area are combined into an array, and the array is used to determine the gray value change of the overlay object and the dynamic brightness change of the image area along the first-type coordinate. One of the important uses of this dynamic brightness change is to analyze whether the manufacturing process of the overlay object is abnormal.
Referring to fig. 8, in an alternative embodiment, the compressed values can reflect the brightness change of the image area well in the ideal case; for example, the compressed values can reveal the brightness trend of the image area and show whether a process anomaly exists at the current process stage of preparing the overlay object. An abnormal process (such as wafer expansion or contraction) destroys the effect whereby the compressed values would otherwise reflect the true brightness trend of the image area. For example, compressing a column of gray values into one equivalent gray entails a certain loss of resolution, which is a compromise made in order to obtain the brightness trend. If an abnormal process acts on the overlay object and deforms it even slightly, the gray levels of the pixels become distorted, and it is difficult to distinguish whether the distortion is caused by improper compression, by the abnormal process, or by both. The volume change between two neighboring small squares shown in fig. 8 is sometimes relatively large, and the factor inducing these more conspicuous abrupt gray changes of neighboring pixels can sometimes be attributed to gray distortion.
Referring to fig. 8, in an alternative embodiment, the mechanism for judging whether the brightness change of the image area exhibits a mirror-symmetrical dynamic trend centered on the overlay object may therefore be affected by gray distortion, so that neither a positive nor a negative result is suitable, on its own, as the true basis for determining whether the overlay object preparation process is abnormal.
Referring to fig. 8, according to the trend of semiconductor technology, as technology nodes advance (from hundreds of nanometers earlier to a few nanometers at present, e.g. 3 to 5 nanometers), the allowable alignment deviation of the critical lithography layer, i.e. the overlay difference, is scaled down year by year, so how to handle the relationship between overlay process anomalies and gray values is one of the core problems to be solved at smaller technology nodes. The foregoing technical solutions partially solve such problems, so that overlay gray measurement can cope with the challenges brought by the development trend of the International Technology Roadmap for Semiconductors (ITRS). However, as smaller technology nodes advance, there is still a need to reduce the superposition of manufacturing process factors and unreasonable compression of overlay objects in order to reduce distortion; this is the improvement of gray value compression mentioned below, for example, as described in connection with fig. 9.
Referring to fig. 9, it is noted that the symmetry, or asymmetry, of the compressed values that would otherwise reflect the brightness change is destroyed by an abnormal process (such as wafer expansion or contraction); the process anomaly becomes hidden and cannot be analyzed from the compressed values or the array, and therefore the compression method needs to be changed.
Referring to fig. 8, the dotted-line area COM1 clearly shows that the small squares exhibit large abrupt changes.
Referring to fig. 8, the context has shown that the process node of the wafer 10 has gone from the micron scale to the nanometer scale; when the critical dimension is only a few nanometers (e.g., 3 to 5 nanometers), the volume change of the small squares fluctuates very easily, and the nanometer-scale process is further superimposed with gray distortion, so that judging the dynamic brightness change of the image area becomes error-prone. In other words, there is still room for improvement in deriving the true dynamic brightness change of the image area from the illustrated scheme. Nevertheless, compared with the traditional scheme, the illustrated scheme has the advantages of simple calculation, accurate image analysis, and clear brightness dynamics.
Referring to fig. 9, as a further improvement of the compression method: the gray value of any gray level appearing in the set is multiplied by the number of times that gray level appears in the set, and the result is taken as the gray level product corresponding to that gray level; all the gray level products generated in the set are added, the addition result is further added to the maximum gray level product in the other set corresponding to the next first-type coordinate adjacent to the first-type coordinate in question, giving a sum, and the sum is divided by the total number of all the second-type coordinates corresponding to that first-type coordinate to give the equivalent gray corresponding to that first-type coordinate.
Referring to fig. 9, the dotted-line area COM2 clearly shows that the small squares change more smoothly and the degree of abrupt change is small.
Referring to fig. 9, in an alternative embodiment, in conjunction with fig. 3, given the set DS_X2 of the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X2, the set DS_X2 is evidently the set of the gray values of the pixel points (X2, Y0), (X2, Y1), (X2, Y2) … (X2, YM). These gray values are compressed to obtain Gray_X2.
Referring to fig. 9, in an alternative embodiment, in conjunction with fig. 3, given the set DS_X3 of the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X3, the set DS_X3 is evidently the set of the gray values of the pixel points (X3, Y0), (X3, Y1), (X3, Y2) … (X3, YM). These gray values are compressed to obtain Gray_X3.
Referring to fig. 9, in an alternative embodiment, in conjunction with fig. 3, the equivalent grays Gray_X0 through Gray_XN corresponding to the first-type coordinates X0 through XN of the image area 12A are combined into an array AR.
Referring to fig. 9, in an alternative embodiment, in conjunction with fig. 3, the gray values of the pixels of the image area 12A of the overlay object are recorded, and, at any first-type coordinate X2 of the image area 12A, the set DS_X2 of the gray values of all the second-type coordinates Y0 to YM corresponding to the first-type coordinate X2 is extracted; the gray values of all the second-type coordinates corresponding to the first-type coordinate X2 are compressed, and the compressed result is represented by the equivalent gray Gray_X2; the equivalent grays Gray_X0 to Gray_XN corresponding to all the first-type coordinates X0 to XN of the image area 12A are combined into an array AR, which is used to determine the gray value change of the overlay object 12 and the like and thereby the dynamic brightness change of the image area 12A along the first-type coordinate. The gray value of any gray level appearing in the set DS_X2 is multiplied by the number of times that gray level appears in the set, giving the gray level product corresponding to that gray level; all the gray level products generated in the set DS_X2 are added, the addition result is further added to the maximum gray level product in the other set DS_X3 corresponding to the next first-type coordinate X3 adjacent to the first-type coordinate X2, giving a sum, and this sum is divided by the total number of all the second-type coordinates corresponding to the first-type coordinate X2, for example M, to give the equivalent gray Gray_X2 corresponding to the first-type coordinate X2.
Referring to fig. 9, in an alternative embodiment, in conjunction with fig. 3, the largest gray-level product in the other set DS_X3, corresponding to the first-type coordinate X3 immediately following the first-type coordinate X2, is obtained as described above: the gray value of any gray level appearing in the set DS_X3, for example 233, is multiplied by the number of times that gray level appears in DS_X3, for example K, giving the gray-level product 233×K corresponding to that gray level. Similarly, the gray value 199 of another gray level appearing in DS_X3 is multiplied by its occurrence count Q, giving the gray-level product 199×Q; the gray value 58 is multiplied by its occurrence count P, giving the gray-level product 58×P; and the gray value 17 is multiplied by its occurrence count V, giving the gray-level product 17×V. Assuming that 58×P > 199×Q > 233×K > 17×V, the largest gray-level product in the set DS_X3 is clearly 58×P.
Referring to fig. 9, in an alternative embodiment, it is assumed that no gray values other than the gray levels 17, 58, 199 and 233 appear in the set DS_X3. This particular case is merely a hypothetical example made for convenience of explanation; such a case does not generally arise in practice.
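With occurrence counts chosen purely so that the stated ordering 58×P > 199×Q > 233×K > 17×V holds (the values of K, Q, P and V are not given in the description), the selection of the largest gray-level product can be checked directly:

```python
# Hypothetical occurrence counts, chosen only so that 58*P > 199*Q > 233*K > 17*V.
K, Q, P, V = 2, 3, 12, 1            # counts of gray levels 233, 199, 58 and 17 in DS_X3
products = {233: 233 * K, 199: 199 * Q, 58: 58 * P, 17: 17 * V}
# products == {233: 466, 199: 597, 58: 696, 17: 17}
assert max(products.values()) == 58 * P   # the largest gray-level product is 58*P
```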
Referring to fig. 9, in an alternative embodiment, the volume changes between two neighbouring small squares shown in fig. 8 are sometimes large in magnitude; the small squares represent the equivalent gray scales after compression, and the more pronounced abrupt changes in gray scale between these neighbouring pixels can sometimes be attributed to gray-scale distortion. The volume change between two adjacent small squares shown in fig. 9 is of relatively small magnitude most of the time, which conforms more closely to the true image of the wafer 10 and masks most of the gray-scale distortion. In practice, fig. 8 and fig. 9 share the same image area 12A; their compression methods differ only slightly, yet their effectiveness for judging whether the overlay-object preparation process is abnormal differs considerably. Note that abrupt changes in the volume of the squares of fig. 8 often result in large errors in the dynamic brightness variation exhibited by the array. The scheme of fig. 8 is computationally simple, and the compression of each column of pixels need not take into account the influence of the other surrounding columns, whereas the scheme of fig. 9 is computationally more complex, and the compression of each column of pixels must take into account the coupling with the other surrounding columns. Equivalently, in the row-wise case, the scheme of fig. 8 compresses each row of pixels without considering the other surrounding rows, whereas the scheme of fig. 9 must take into account the coupling with the other surrounding rows.
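One rough, illustrative way to quantify the difference in abruptness between the two compression results is the largest change between adjacent equivalent grays. The two profiles in the sketch below are invented stand-ins, not data taken from fig. 8 or fig. 9:

```python
import numpy as np

def max_jump(equivalent_grays):
    # Largest change between adjacent equivalent grays: a rough measure of
    # how abruptly the compressed brightness profile varies.
    return float(np.max(np.abs(np.diff(equivalent_grays))))

# Invented stand-in profiles: a smoother one and a more abrupt one.
smooth = np.array([120.0, 122.5, 125.0, 124.0, 121.5, 119.0])
abrupt = np.array([120.0, 140.0, 110.0, 150.0, 105.0, 135.0])
print(max_jump(smooth), max_jump(abrupt))   # 2.5 and 45.0 for these stand-ins
```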
Referring to fig. 9, unlike the compression of fig. 8, the compression of each column (or each row) of pixels is required to correlate and couple its gray values with those of the adjacent columns (or rows), forcing the gray-scale compression curve to bend locally in accordance with the real image of the wafer, so that the dynamic brightness variation of the image area along the first-type coordinate direction is somewhat smoother. This correction of the brightness variation makes it easy to infer the dynamic brightness trend of the image areas distributed on both sides of the overlay object, so that it is easier to determine whether the brightness variation of the image area follows a dynamic trend that is mirror-symmetric about the overlay object, and hence easier to determine whether the wafer 10 is abnormal at the current process stage of preparing the overlay object 12/14.
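As a purely illustrative sketch, and not the determination method of the claims, one simple way to test whether an equivalent-gray profile is approximately mirror-symmetric about the centre of the overlay object is to compare the profile with its own reverse; the tolerance used below is an assumed threshold chosen only for illustration.

```python
import numpy as np

def is_mirror_symmetric(equivalent_grays, tolerance=5.0):
    """Rough check of mirror symmetry about the centre of the profile.

    equivalent_grays: the array AR of equivalent grays Gray_X0 ... Gray_XN.
    tolerance: an assumed threshold on the mean deviation, for illustration only.
    """
    profile = np.asarray(equivalent_grays, dtype=float)
    deviation = np.mean(np.abs(profile - profile[::-1]))
    return bool(deviation <= tolerance)

# Example: a stand-in profile that brightens towards the centre and dims symmetrically.
AR = np.array([100, 110, 130, 150, 130, 110, 100], dtype=float)
print(is_mirror_symmetric(AR))   # True for this symmetric stand-in profile
```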
Referring to fig. 9, the photolithography example above demonstrates the importance of overlay-difference detection. Overlay measurements provide a basis for evaluating and controlling the patterning process in integrated-circuit photomask fabrication and lithography, which require highly accurate overlay measurement. The overlay object includes critical-dimension structures, specific marks, and alignment structures.
Referring to fig. 9, as integrated circuit devices (e.g., logic chips, memory chips, and the like) progress toward smaller nanoscale dimensions, characterization of overlay differences becomes more difficult. Components with complex three-dimensional geometries and materials with disparate physical properties (e.g., 3D memory) can exacerbate the difficulty of overlay characterization. For example, modern memory structures are typically three-dimensional structures with a high depth-to-width ratio, which makes overlay variations, and the critical processes associated with them, more sensitive. Optical metrology tools utilizing infrared to visible light can penetrate multiple layers of translucent materials, and longer wavelengths, which provide good penetration depth, can offer adequate sensitivity to various related process anomalies. Complex integrated circuit devices, such as FinFETs, require an increasing number of process parameters to be correlated with overlay-difference measurements. The scheme of fig. 9 accommodates precisely this requirement.
Referring to fig. 9, one of the drawbacks of prior-art overlay-difference measurement is that it does not consider whether the process associated with the overlay difference is normal and focuses essentially on the metrology itself. The overlay difference obtained in that case (whether or not it is within specification) may be the product of an abnormal process condition, and it is not known whether an in-specification or out-of-specification overlay difference is associated with the related process. Judging whether the overlay difference currently inferred for the evaluated wafer is mismatched with the true error it should match addresses this problem and allows a reasonable mechanism for rejecting and identifying abnormal processes among the factors that generate the overlay difference. The present application can be configured to adaptively infer, by measurement, the various possible overlay differences, and thereby avoid overlay differences caused by process problems, such as wafer expansion or contraction or wafer deflection, that cannot be identified by conventional schemes.
Referring to fig. 9 and comparing it with fig. 8, it can be seen that the gray-scale compression scheme for the overlay object is not unique; different gray-scale compressions have different characteristics and different beneficial effects.
Referring to fig. 7, an image area of an overlay object is recorded, and the image data may be obtained by, for example: optical imaging, scanning electron microscopy (SEM), X-ray imaging, spectroscopic ellipsometry, reflectometry, and the like. The measurement of overlay involves a computer, server, or processor unit that can run a computer program; alternatives to the processor unit include a field-programmable gate array, a complex programmable logic device, a field-programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor, an integrated circuit, a GPU, software or firmware stored in a memory, and the like.
Referring to fig. 7, a system for measuring the gray-value variation of a wafer overlay object includes an image capturing device for capturing and extracting the image area of the overlay object. The image capturing device may include, but is not limited to: optical imaging, scanning electron microscopy (SEM), X-ray imaging, spectroscopic ellipsometry, reflectometry, and the like.
Referring to fig. 7, the system for measuring the gray-value variation of a wafer overlay object further includes a computer, which receives the image data of the image area transmitted by the image capturing device.
Referring to fig. 7, a computer operable for overlay measurement includes, but is not limited to: a server, a personal computer system or mainframe computer system, a computer workstation, an image computer, a parallel processor, or any other device known for overlay-difference measurement. In general, the term computer system is defined broadly to encompass any device having one or more processors that execute instructions from a memory medium. The associated computer program for the overlay-difference measurement process may be stored on any computer-readable medium, such as a memory. Exemplary computer-readable media include read-only memory, random-access memory, magnetic or optical disks, magnetic tape, and the like.
Referring to fig. 7, in an alternative embodiment, the overlay object on the wafer includes critical-dimension structures, such as bottom, middle, or top critical-dimension structures, or gratings, in any two or more of the front and back layers that need to be aligned. The overlay differences on the wafer include the displacement between any two or more structures in the front and back layers, as well as the alignment displacement between grating structures.
Referring to fig. 7, the computer records the gray scale of each pixel point of the image area and captures the set of gray values of all the second-type coordinates corresponding to any first-type coordinate of the image area. The computer compresses the gray values of all the second-type coordinates corresponding to that first-type coordinate and represents the compressed gray values by an equivalent gray scale; the equivalent gray scales corresponding to all the first-type coordinates of the image area are combined into an array, which is used for judging the gray-value variation of the overlay object and for determining the dynamic brightness variation of the image area along the first-type coordinate.
Referring to fig. 7, the computer records the gray scale of each pixel point of the image area and captures the set of gray values of all the second-type coordinates corresponding to any first-type coordinate of the image area. The computer compresses the gray values of all the second-type coordinates corresponding to that first-type coordinate. The compression operation performed by the computer is: the gray value of any gray level appearing in the set is multiplied by the number of times that gray level appears in the set, and the result is taken as the gray-level product corresponding to that gray level; all gray-level products generated from the set are first added together, the result of this addition is added to the largest gray-level product in the other set corresponding to the next first-type coordinate adjacent to the current first-type coordinate to obtain a sum, and the sum is divided by the total number of second-type coordinates corresponding to the current first-type coordinate to serve as the equivalent gray scale corresponding to that first-type coordinate.
Referring to fig. 7, in an alternative embodiment, for overlay gray-scale measurement, the method of measuring the gray scale of an integrated-circuit overlay object, or the function of evaluating gray-scale variation, may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the corresponding computing methods and functions may be stored on, or executed as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, the latter including any medium that can transfer a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage, optical storage, or any other medium that can be used to carry or store the corresponding program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Any connection may also properly be regarded as a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line, or wireless technologies (infrared, radio, and microwave), then the coaxial cable, fiber-optic cable, twisted pair, digital subscriber line, or wireless technologies are included in the definition of medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, digital versatile disc, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing description and drawings set forth exemplary embodiments of the specific structures of the embodiments, and the above disclosure presents presently preferred embodiments, but they are not intended to be limiting. Various alterations and modifications will no doubt become apparent to those skilled in the art after reading the above description. It is therefore intended that the appended claims be interpreted as covering all alterations and modifications that fall within the true spirit and scope of the invention. Any and all equivalent ranges and contents within the scope of the claims should be considered to be within the intent and scope of the present invention.