Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the embodiments described are only some, rather than all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
In order to flexibly detect the integrity of text data and improve the universality of the detection method, the embodiments of the present application provide a text data integrity detection scheme, which comprises the steps of: first, performing instance segmentation processing on a target image and determining a target image area of a target instance in the target image; then, performing text detection processing on the target image area to obtain at least one original text box and determining the original text box area of each original text box in the target image; and finally, if a target original text box area partially overlapping the target image area exists, determining that the text data of the target image area is incomplete.
In one embodiment, the above text data integrity detection scheme may be performed by a terminal device. The terminal device may include any one or more of a smart phone, a tablet computer, a notebook computer, a desktop computer, an intelligent vehicle, and an intelligent wearable device; the terminal device may also be a stand-alone server, a cloud server, a server cluster, or a distributed system, which is not limited herein. The terminal device may take an image shot by a user as the target image, or select one image from an image database in the terminal device as the target image. The terminal device then performs instance segmentation processing on the target image to determine a target image area of a target instance in the target image, performs text detection processing on the target image area to obtain at least one original text box, and determines the original text box area of each original text box in the target image. Finally, if a target original text box area partially overlapping the target image area exists, the terminal device determines that the text data of the target image area is incomplete.
Based on the above text data integrity detection scheme, the embodiments of the present application provide a text data integrity detection system. Referring to fig. 1, a schematic structural diagram of a text data integrity detection system according to an embodiment of the present application is provided. The text data integrity detection system shown in fig. 1 may include a terminal device 101 and a server 102. The terminal device 101 may include any one or more of a smart phone, a tablet computer, a notebook computer, a desktop computer, an intelligent vehicle, and an intelligent wearable device. The server 102 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data, and artificial intelligence platforms. The terminal device 101 and the server 102 may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
In one embodiment, the user of the terminal device 101 may take a photographed image as the target image, or select an image from an image database in the terminal device as the target image, and then upload the target image to the server 102. The server 102 performs instance segmentation processing on the target image to determine a target image area of a target instance in the target image, performs text detection processing on the target image area to obtain at least one original text box, and determines the original text box area of each original text box in the target image. Finally, the server 102 determines whether there is a target original text box area partially overlapping the target image area. If such a target original text box area exists, the server 102 determines that the text data of the target image area is incomplete and may send a prompt message about the incomplete text data to the terminal device 101 to prompt the user to re-upload the target image; if no such area exists, the server 102 determines that the text data of the target image area is complete and may send a prompt message to the terminal device 101 indicating that the target image was uploaded successfully.
Based on the text data integrity detection scheme and the text data integrity detection system, the embodiment of the application provides a text data integrity detection method. Referring to fig. 2, a flow chart of a text data integrity detection method according to an embodiment of the present application is shown. The integrity detection method of text data shown in fig. 2 may be performed by the server or the terminal device shown in fig. 1. The text data integrity detection method shown in fig. 2 may include the steps of:
S201, performing instance segmentation processing on the target image, and determining a target image area of the target instance in the target image.
In the embodiments of the present application, the target image may be a photographed image, or an image selected from a plurality of images in an image database. The image may be selected, for example, by the user choosing one or more images from the plurality of images as the target image, or by the terminal device or the server arbitrarily choosing one or more images from the plurality of images as the target image, without limitation.
In addition, the target image area may be an image area of any object identified in the target image, or an image area of an object of a preset type identified in the target image. The preset type may be set by a user or by the system, and is not limited herein.
For example, insurance user A needs to file an insurance reimbursement and therefore captures an image B of a hospital invoice with a mobile phone; because of the cluttered shooting background, some other receipts are captured in image B besides the invoice. Insurance user A then uploads image B as the target image to the insurance reimbursement platform, and the platform performs instance segmentation processing on image B. Since the platform mainly handles insurance reimbursement, it needs to detect whether the text data in the invoice to be reimbursed is complete; it therefore takes the image area identified as the invoice as the target image area, while the image areas of the receipts not identified as invoices are not taken as target image areas.
In the embodiments of the present application, the instance segmentation processing may be performed on the target image by a trained instance segmentation model. Illustratively, the instance segmentation model may be a deep learning model such as Mask R-CNN (Mask Region-based Convolutional Neural Network), Fast R-CNN (a region-based convolutional neural network with faster detection than R-CNN), or YOLACT (a real-time instance segmentation model published at ICCV 2019, built mainly from two parallel convolutional sub-networks), and different models may be flexibly selected based on different requirements, which is not limited herein. The training process of the instance segmentation model is a technical means familiar to those skilled in the art and is not described herein.
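For illustration only, the following is a minimal sketch of this step using a pretrained Mask R-CNN from torchvision; the model choice, the weights argument (torchvision >= 0.13), and the 0.5 score threshold are assumptions, not the claimed implementation.

```python
# Illustrative sketch: instance segmentation with a pretrained Mask R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_instances(image_path, score_threshold=0.5):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    keep = prediction["scores"] > score_threshold
    masks = prediction["masks"][keep].squeeze(1) > 0.5  # one binary mask per instance
    boxes = prediction["boxes"][keep]                   # (x0, y0, x1, y1) per instance
    return masks, boxes
```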
In a specific implementation, a plurality of target images may be input as samples into the instance segmentation model Mask R-CNN, the model performs instance segmentation processing on the plurality of target images, and the accuracy of the model is then evaluated from the instance segmentation results. Each target image may contain 0, 1, or more invoices or bills. The final instance segmentation results are shown in Tables 1 and 2, where Table 1 indicates the number of invoices predicted in each target image and Table 2 indicates the number of bills predicted in each target image.
TABLE 1
TABLE 2
Here, precision refers to the proportion of samples predicted as positive that are actually positive. For example, if 100 target images are predicted to contain 1 invoice but only 80 of them actually contain 1 invoice (the remaining 20 containing none or more than one), the precision is 80%. Recall refers to the proportion of samples that are actually positive and are also predicted as positive, out of all samples that are actually positive. For example, if the input samples include 100 target images that actually contain 1 invoice, 90 target images are predicted to contain 1 invoice, and 85 of those actually contain 1 invoice, the recall is 85%. The F1 score is a measure for classification problems; it is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0, and may be calculated as (2 × recall × precision)/(recall + precision).
In addition, the macro average refers to the mean of the F1 scores of all categories, and the weighted average is an improvement on the macro average in that the proportion of each category's sample count to the total sample count is used as the weight of that category's F1 score when averaging. The macro average and weighted average are indicators for measuring the quality of a classification result, are technical means familiar to those skilled in the art, and are not described herein.
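For illustration only, the sketch below computes per-class precision, recall, and F1 together with the macro and weighted averages described above; the per-class counts are hypothetical and do not reproduce Tables 1 and 2.

```python
# Hypothetical counts: class label -> (true positives, predicted count, actual count).
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

counts = {"0 invoices": (40, 45, 42), "1 invoice": (80, 100, 100), "2+ invoices": (10, 12, 15)}

f1_scores, supports = {}, {}
for label, (tp, predicted, actual) in counts.items():
    precision, recall = tp / predicted, tp / actual
    f1_scores[label], supports[label] = f1(precision, recall), actual

macro_f1 = sum(f1_scores.values()) / len(f1_scores)           # plain mean of per-class F1
total = sum(supports.values())
weighted_f1 = sum(f1_scores[c] * supports[c] / total for c in f1_scores)  # support-weighted
print(f"macro F1 = {macro_f1:.3f}, weighted F1 = {weighted_f1:.3f}")
```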
As can be seen from the instance segmentation results in Tables 1 and 2, when the instance segmentation model Mask R-CNN is used to determine the target image area of an invoice or bill in the target image, the misjudgment rate for the target image area is very low, i.e., the accuracy is high. This facilitates the subsequent detection of the text data in the target image area and effectively strengthens the robustness of the overall text data integrity detection method.
In one embodiment, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may comprise: 1) performing instance segmentation processing on the target image and determining the image area of each of at least one instance in the target image; 2) acquiring the position information of each image area in the target image; and 3) taking an image area as a target image area if, based on its position information, it is determined that the image area is located in the central area of the target image.
Specifically, when a user shoots and uploads a document such as an invoice or bill, the document to be checked is generally placed as close to the middle of the image as possible. Therefore, whether an image area obtained by the instance segmentation processing is an image area of text data that the user wants checked can be determined by judging whether the image area lies in the central area of the target image.
For example, referring to fig. 3, fig. 3 shows a schematic diagram of a target image area. After instance segmentation processing of the target image 301, two image areas, namely image area 302 and image area 303, can be determined, and the central area 304 of the target image 301 may be set to span from 1/4 to 3/4 of the length and from 1/4 to 3/4 of the width of the target image 301. The position information of image area 302 and image area 303 in the target image 301 is then acquired; from this position information, the center point of image area 302 is determined as center point 306 and the center point of image area 303 as center point 305. By comparing the positions of center point 306 and center point 305 with the central area 304, it can be determined, as shown in fig. 3, that center point 306 falls into the central area 304 while center point 305 does not; therefore, image area 302 can be determined to be the target image area.
Alternatively, whether each image area is located in the central area of the target image may also be determined from the size of the region where the image area overlaps the central area: if the size of the overlapping region is greater than a preset area size, the image area is determined to be located in the central area; otherwise, it is not.
Optionally, the position of the center point of the target image and the position of the center point of each image area may be obtained, and the distance between the center point of the target image and the center point of each image area compared; if the distance between the center point of any image area and the center point of the target image is smaller than a third preset distance threshold, that image area is determined to be a target image area. The third preset distance threshold may be a length or a number of pixels, for example 0.8 mm, 0.9 to 1.1 cm, 2 pixels, or 2 to 4 pixels, which is not limited herein. Alternatively, whether each image area is located in the central area of the target image may be determined in other ways, which is not limited herein.
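As one possible reading of the central-area criterion of fig. 3, the following sketch tests whether the center point of an instance mask falls in the central 1/4 to 3/4 band of the target image; taking the mean of the mask pixel coordinates as the center point is an assumption for illustration.

```python
# Sketch: does the center point of an image area fall in the central area?
import numpy as np

def is_in_center_area(mask):
    height, width = mask.shape
    ys, xs = np.nonzero(mask)              # pixels belonging to the image area
    cx, cy = xs.mean(), ys.mean()          # center point of the image area
    return (width / 4 <= cx <= 3 * width / 4 and
            height / 4 <= cy <= 3 * height / 4)
```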
In one embodiment, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may comprise: 1) performing instance segmentation processing on the target image and determining the image area of each of at least one instance in the target image; 2) determining a first image area among the determined at least one image area, wherein the area size of the first image area is larger than the area size of each second image area, a second image area being any image area other than the first image area; 3) obtaining the ratio of the area size of each second image area to the area size of the first image area; and 4) taking the first image area, together with each second image area whose ratio is greater than a second preset ratio threshold, as target image areas. Optionally, the second preset ratio threshold may be a fraction or a percentage, such as 1/2, 80%, or 50% to 80%.
For example, referring to fig. 4, fig. 4 shows a schematic diagram of another target image area. The second preset ratio threshold is set to 1/2, and the target image 401 is input into the instance segmentation model Mask R-CNN for instance segmentation processing, whereby the image areas (i.e., mask areas) of the four instances in the target image can be determined as image area 402, image area 403, image area 404, and image area 405. By calculating the areas of these image areas in the target image 401, it can be determined that the area of image area 402 is 640000 pixels, that of image area 403 is 450000 pixels, that of image area 404 is 80000 pixels, and that of image area 405 is 38000 pixels; thus image area 402 is the first image area, and image areas 403, 404, and 405 are second image areas. Since the ratio of image area 403 to image area 402 is greater than 1/2 while the ratios of image areas 404 and 405 to image area 402 are less than 1/2, the target image areas can be determined to be image area 402 and image area 403.
Alternatively, the area size may be, besides the area of the region, the length, the width, or the perimeter of the region, or any other data measuring the size of the image area, which is not limited herein. For example, the area size of an image area C may be the length of image area C, i.e., 1000 pixels.
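A sketch of the area-ratio selection above, using mask pixel counts as the area size (one of the measures just mentioned); the threshold and inputs are illustrative assumptions.

```python
# Sketch: keep the largest image area (the first image area) plus every second
# image area whose pixel count exceeds the preset fraction of it.
import numpy as np

def filter_by_area_ratio(masks, ratio_threshold=0.5):
    sizes = [int(np.count_nonzero(m)) for m in masks]
    first = max(sizes)
    return [m for m, s in zip(masks, sizes)
            if s == first or s / first > ratio_threshold]
```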
In one embodiment, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may further comprise: 1) performing instance segmentation processing on the target image and determining the image area of each of at least one instance in the target image; 2) determining a first image area among the determined at least one image area, wherein the area size of the first image area is larger than the area size of each second image area, a second image area being any image area other than the first image area; 3) obtaining the ratio of the area size of each second image area to the area size of the first image area; 4) taking the first image area, together with each second image area whose ratio is greater than the second preset ratio threshold, as preselected target image areas; 5) acquiring the position information of each preselected target image area in the target image; and 6) taking a preselected target image area as a target image area if, based on its position information, it is determined to be located in the central area of the target image.
In other words, the image areas may first be preselected by the area-size criterion and the target image areas then determined by the central-area criterion; alternatively, the area-size criterion and the central-area criterion may be applied simultaneously, and an image area is taken as a target image area only when both criteria so determine, which is not limited herein.
S202, performing text detection processing on the target image area to obtain at least one original text box, and determining the original text box area of each original text box in the target image.
In the embodiments of the present application, the text detection processing may be performed on the target image area by a trained text detection model. Illustratively, the text detection model may be a deep learning model such as DBNet (a segmentation-based text detection network with differentiable binarization), CTPN (a text detection network combining a convolutional neural network and a recurrent neural network), or SegLink (a convolutional neural network capable of detecting rotated text), and different models may be flexibly selected based on different requirements, which is not limited herein. The training process of the text detection model is a technical means familiar to those skilled in the art and is not described herein.
In the embodiments of the present application, to facilitate the subsequent determination of whether the target image area overlaps an original text box area, the text box area of each original text box obtained by the text detection processing is equal to, or slightly larger than, the region occupied by the corresponding text data in the target image area.
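The following sketch illustrates step S202 under the assumption that some trained detector is available; `detect_text_boxes` is a hypothetical stand-in for a DBNet/CTPN/SegLink inference call, and the boxes it returns in crop coordinates are offset back so that the original text box areas are expressed in the coordinates of the whole target image.

```python
# Sketch of step S202: detect text in the target image area, then express the
# original text box areas in target-image coordinates.
def original_text_boxes(image, area_box, detect_text_boxes):
    x0, y0, x1, y1 = area_box                 # target image area, image coordinates
    crop = image[y0:y1, x0:x1]                # assumes an H x W x C array
    boxes = detect_text_boxes(crop)           # hypothetical: [(bx0, by0, bx1, by1), ...]
    return [(bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)
            for bx0, by0, bx1, by1 in boxes]
```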
S203, if a target original text box area partially overlapping the target image area exists, determining that the text data of the target image area is incomplete.
In the embodiments of the present application, whether the target image area and an original text box area partially overlap may be judged as follows: determine the region position of the target image area in the target image and the position information of the original text box area in the target image; determine, through position analysis of that position information, the positions of the four corner points of the original text box area in the target image; and if some of the four corner points fall within the region position of the target image area while the others do not, determine that the target image area and the original text box area partially overlap. A corner point here denotes an intersection point of two adjacent boundaries of the original text box.
For example, referring to fig. 5, fig. 5 shows a schematic diagram of partially overlapping regions. After instance segmentation processing of the target image 501, image area 502 and image area 503 are obtained, and image area 502 is determined as the target image area. After text detection processing of the target image 501, the 9 original text boxes shown in fig. 5 and their original text box areas are obtained, together with the positions of the four corner points of each original text box area in the target image. The position of corner point 505 of original text box 504 does not fall within the target image area 502, while the positions of the other corner points of original text box 504 do; therefore, it may be determined that original text box 504 partially overlaps the target image area, and original text box 504 is the target original text box.
Alternatively, whether there is an original text box of which one part falls within the target image area and another part does not may be determined from the position information of the original text box and the position information of the target image area, thereby establishing that an original text box area partially overlapping the target image area exists. Whether such a partially overlapping original text box area exists may also be determined in other ways, which is not limited herein.
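A sketch of the corner-point judgment described above, assuming the target image area is given as a binary instance mask and the text box as an axis-aligned rectangle; both representations are assumptions for illustration.

```python
# Sketch: a text box partially overlaps the target image area when some, but
# not all, of its four corner points fall inside the instance mask.
def partially_overlaps(box, area_mask):
    x0, y0, x1, y1 = box
    height, width = area_mask.shape
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    inside = sum(1 for x, y in corners
                 if 0 <= x < width and 0 <= y < height and area_mask[int(y), int(x)])
    return 0 < inside < 4
```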
In one embodiment, it is also possible to first perform instance segmentation processing on the target image and determine the image area of each of at least one instance in the target image, then perform text detection processing on each image area and judge whether a text box area partially overlapping that image area exists; if so, the text data of that image area is determined to be incomplete. It is then judged, by the area-size or central-area criteria mentioned in step S201, whether that image area is a target image area: if not, the text data in the target image is determined to be complete; if so, the text data in the target image is determined to be incomplete.
In the embodiments of the present application, instance segmentation processing is performed on a target image to determine a target image area of a target instance in the target image; text detection processing is performed on the target image area to obtain at least one original text box, and the original text box area of each original text box in the target image is determined; and if a target original text box area partially overlapping the target image area exists, the text data of the target image area is determined to be incomplete. According to the embodiments of the present application, whether the text data in the target image area is complete can be judged by determining whether the target image area and the original text box areas obtained by text detection partially overlap. In addition, since no template or other external condition is used in the integrity detection process, the method is applicable to detecting the integrity of text data in different complex scenes, has strong universality, can flexibly detect the integrity of text data, and thereby improves the universality and reliability of text data integrity detection.
Based on the text data integrity detection scheme and the text data integrity detection system, the embodiment of the application provides another text data integrity detection method. Referring to fig. 6, a flowchart of another method for detecting integrity of text data according to an embodiment of the present application is shown. The integrity detection method of text data shown in fig. 6 may be performed by the server or the terminal device shown in fig. 1. The text data integrity detection method shown in fig. 6 may include the steps of:
S601, performing instance segmentation processing on a target image, and determining a target image area of a target instance in the target image.
In the embodiments of the present application, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may comprise: performing instance segmentation processing on the target image and determining the image area of each of at least one instance in the target image; obtaining, for any determined image area, the distance between each of its area boundaries and the corresponding image boundary of the target image; and taking that image area as a target image area if the distance for at least one area boundary is smaller than a second preset distance threshold.
Optionally, the second preset distance threshold may be the number of pixels, or may be other parameters for measuring the distance in the image, which is not limited herein. The second preset distance threshold may be, for example, 0.8mm, 0.9 to 1.1cm, 2 pixels, or 2 to 4 pixels, which is not limited herein.
Alternatively, the second preset distance threshold may be determined from the number of pixels filled when the image edge of the target image is mirror-filled in step S602. For example, if 3 pixels are mirror-filled at the image edge of the target image, the second preset distance threshold may be 3 pixels; if the width of each pixel is 0.109 mm, the second preset distance threshold may also be expressed as 0.327 mm.
Specifically, by judging the distance between each area boundary of an image area and the corresponding image boundary of the target image, the possibility that the invoice, bill, or other important information corresponding to that image area was not fully captured can be assessed: the smaller the distance between an area boundary and its corresponding image boundary, the higher the possibility that the image area was not fully captured, so that the image area needs to be determined as a target image area and its text data further checked for completeness.
For example, referring to fig. 7, fig. 7 shows a schematic diagram of yet another target image area. The second preset distance threshold is set to 20 pixels, and after instance segmentation processing of the target image 701, image area 702 and image area 703 are obtained. The distances between the area boundaries of image area 702 and the corresponding image boundaries of the target image are distance a, distance b, distance c, and distance d, and those of image area 703 are distance e, distance f, distance g, and distance h. Since distances a, b, c, and d of image area 702 are all greater than 20 pixels while distance h of image area 703 is less than 20 pixels, image area 703 can be determined as the target image area.
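A sketch of the boundary-distance criterion of step S601, assuming the image area is a binary mask over the whole target image; the four distances correspond to distances such as a to d in fig. 7, and the 20-pixel threshold matches the example.

```python
# Sketch: the image area becomes a target image area when any side of its mask
# lies closer to the matching image boundary than the second preset threshold.
import numpy as np

def near_image_boundary(mask, distance_threshold=20):
    height, width = mask.shape
    ys, xs = np.nonzero(mask)
    distances = (xs.min(),                  # to the left image boundary
                 ys.min(),                  # to the top image boundary
                 width - 1 - xs.max(),      # to the right image boundary
                 height - 1 - ys.max())     # to the bottom image boundary
    return any(d < distance_threshold for d in distances)
```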
In one embodiment, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may further comprise: performing instance segmentation processing on the target image and determining the image area of each of at least one instance in the target image; determining the area size of each image area; and determining an image area as a target image area if its area size is greater than a preset size threshold.
Alternatively, the area size may be the area of the image area, or data such as its length or width used to measure its size; the area, length, width, and the like may be expressed as a number of pixels or in length units such as millimeters or centimeters, which is not limited herein. Optionally, the preset size threshold may be an area or other data measuring image size, such as a length or width; for example, an area of 30000 pixels, a length of 500 pixels, or a width of 3000 to 10000 pixels.
Illustratively, after instance segmentation processing of a target image Y, image areas y1, y2, and y3 are obtained. The area of each image area is then calculated from its position information in the target image: the area of image area y1 is 180000 pixels, that of image area y2 is 120000 pixels, and that of image area y3 is 30000 pixels. Since the preset size threshold is 100000 pixels, the target image areas can be determined to be image area y1 and image area y2.
In one embodiment, performing instance segmentation processing on the target image and determining the target image area of the target instance in the target image may further comprise: first determining preselected target image areas by judging whether the distance between an area boundary of an image area and the corresponding image boundary is smaller than the second preset distance threshold, and then determining the final target image areas by judging whether the area size of each preselected area is greater than the preset size threshold; or, conversely, first determining preselected target image areas by the area-size criterion and then determining the final target image areas by the boundary-distance criterion.
Optionally, one or more of the ways of performing instance segmentation processing on the target image and determining the target image area of the target instance described in step S201 and step S601 may be selected to determine the target image area, which is not limited herein.
Specifically, in the process of determining the target image area, unnecessary or unimportant image areas produced by the instance segmentation processing can be screened out, which improves the efficiency of text data integrity detection and the accuracy of the subsequent integrity detection.
S602, performing mirror image filling processing on the image edge of the target image to obtain a filled image.
S603, determining a filling image area corresponding to the target image area in the filling image.
In the embodiments of the present application, mirror-filling the image edge of the target image means reflecting the pixel values of pixels symmetrically across each of the four image boundaries of the target image (and across its four vertices) and filling the target image accordingly, thereby obtaining the filled image. The filled image area corresponding to the target image area may be determined by performing instance segmentation processing on the filled image, determining the filled image area of each instance in the filled image, and taking a filled image area as the one corresponding to the target image area when both correspond to the same target instance.
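Mirror filling as described here corresponds to reflection padding; a minimal sketch using OpenCV's copyMakeBorder with BORDER_REFLECT follows, where the 20-pixel padding width is an illustrative assumption.

```python
# Sketch of step S602: reflect the target image across its four boundaries to
# obtain the filled image.
import cv2

def mirror_fill(image, pad=20):
    return cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REFLECT)
```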
For example, referring to fig. 8a, which shows a schematic diagram of a filled image, after instance segmentation processing of the target image 801, the image areas of the 3 instances in the target image are determined to be image area 802, image area 803, and image area 804, of which the target image areas are image area 802 and image area 803. The image edge of the target image 801 is then mirror-filled to obtain the filled image 805, and instance segmentation processing of the filled image 805 determines that the filled image area corresponding to image area 802 is image area 806, that corresponding to image area 803 is image area 807, and that corresponding to image area 804 is image area 808.
S604, performing text detection processing on the filled image area to obtain at least one filled text box, and determining the filled text box area of each filled text box in the filled image.
In the embodiment of the present application, the manner of performing text detection processing on the filled image area to obtain the filled text box and the filled text box area may refer to the manner of performing text detection processing on the target image area to obtain the original text box and the original text box area in step S202, which is not described herein in detail.
S605, if a target filled text box area partially overlapping the target image area exists, determining that the text data in the target image area is incomplete.
In the embodiments of the present application, whether a target filled text box area partially overlaps the target image area may be judged by acquiring first position information of the target image area in the filled image and second position information of each filled text box area in the filled image, and judging from the first position information and each piece of second position information whether the corresponding filled text box area partially overlaps the target image area.
Referring to fig. 8b, another schematic diagram of partial overlap is shown. Text detection processing is performed on the filled image areas 806 and 807 corresponding to the target image areas in the filled image 805, resulting in filled text box areas such as filled text box area 809, filled text box area 810, filled text box area 811, and filled text box area 812. By aligning the positions of the filled image 805 and the target image 801, as shown in image 813, the positions in the filled image 805 of image area 802 and image area 803, which are the target image areas, can be determined. Then, from the filled text box areas of filled text boxes 809, 810, 811, and 812 in the filled image, the target filled text box areas partially overlapping the target image areas can be determined to be the filled text box areas of filled text boxes 809, 810, 811, and 812.
In one embodiment, after determining that a target filled text box area partially overlapping the target image area exists, the following may further be performed:
1) determining a target intersection point among the intersection points of the area boundary of the target filled text box area and the area boundary of the target image area;
2) obtaining the distance between the target intersection point and the target image boundary of the filled image, the target image boundary being the boundary matched with the text box sub-area of the target filled text box area that does not overlap the target image area;
3) if the distance is greater than or equal to a first preset distance threshold, determining the factor causing the incomplete text data of the target image area to be a second factor. Optionally, the first preset distance threshold may be a length or a number of pixels, for example 0.8 mm, 0.9 to 1.1 cm, 2 pixels, or 2 to 4 pixels, which is not limited herein.
Optionally, the target image boundary being matched with the text box sub-area of the target filled text box area that does not overlap the target image area means that the distance between the first filled text box sub-area and the target image boundary is smaller than the distance between the second filled text box sub-area and the target image boundary, where the first filled text box sub-area is the sub-area of the target filled text box area that does not overlap the target image area, and the second filled text box sub-area is the sub-area that overlaps the target image area.
Optionally, the first factor refers to text data of the target image area being missing because the target instance was not fully captured during shooting, and the second factor refers to text data of the target image area being occluded by other image areas or by folding of the target instance corresponding to the target image area.
Alternatively, the target intersection point may be the intersection point nearest to the target image boundary. Alternatively, no target intersection point need be singled out: as long as any intersection point of the area boundary of the target filled text box area with the area boundary of the target image area lies at a distance smaller than the first preset distance threshold from the target image boundary, the factor causing the incomplete text data of the target image area is determined to be the first factor; otherwise, it is determined to be the second factor.
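A sketch of this simplified decision rule: given the distances from the intersection points to the matched image boundary of the filled image, any distance below the first preset distance threshold indicates the first factor (text cut off at the image edge), otherwise the second factor (occlusion or folding). The 15-pixel threshold follows the example below.

```python
# Sketch: classify the factor causing incomplete text data from the distances
# between the intersection points and the matched image boundary.
def incompleteness_factor(intersection_distances, distance_threshold=15):
    if any(d < distance_threshold for d in intersection_distances):
        return "first factor: text data not fully captured"
    return "second factor: text data occluded or folded"
```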
For example, referring to fig. 8c, which shows a schematic diagram of the factors behind incomplete text data, the first preset distance threshold is set to 15 pixels. For image area 802, which is a target image area, it may be determined, as shown by image 814, that the intersection points of the area boundary of the filled text box area of filled text box 812 with the area boundary of image area 802 include intersection point 814 and intersection point 815. Meanwhile, the image boundary of the filled image matched with the text box sub-area of filled text box 812 that does not overlap image area 802 may be determined as image boundary M1. Since distance A between intersection point 814 and image boundary M1 is smaller than distance B between intersection point 815 and image boundary M1, intersection point 814 can be taken as the target intersection point. From the position information of the filled text box area of filled text box 812 and the position information of image area 802 in the filled image, distance A is determined to be 38 pixels, which is greater than 15 pixels; therefore, the factor causing the incomplete text data of image area 802 can finally be determined to be the second factor, i.e., the text data is occluded. The processing of filled text box 811 in image area 802 is the same as that of filled text box 812 above, so the factor for it may likewise be determined as the second factor, which is not repeated herein.
For image area 803, which is a target image area, it can be determined, as shown in fig. 8c, that the intersection points of the area boundary of the filled text box area of filled text box 809 with the area boundary of image area 803 include intersection point 816 and intersection point 817. Meanwhile, the image boundary of the filled image matched with the text box sub-area of filled text box 809 that does not overlap image area 803 may be determined as image boundary M2. From the position information of the filled text box area of filled text box 809 and the position information of image area 803 in the filled image, it can be determined that distance C between intersection point 816 and image boundary M2 and distance D between intersection point 817 and image boundary M2 are both 10 pixels, so both intersection points 816 and 817 can be taken as target intersection points. Since 10 pixels is smaller than 15 pixels, the factor causing the incomplete text data of image area 803 can finally be determined to be the first factor, i.e., text data missing because the text was not fully captured. The processing of filled text box 810 in image area 803 is the same as that of filled text box 809 above, so the factor for it may likewise be determined as the first factor, which is not repeated herein.
In a specific implementation, after steps S601 to S605 are executed with a plurality of target images as samples by the text data integrity detection algorithm, the integrity detection results shown in Table 3 can be obtained. Table 3 shows that across the 3 detection result types the misjudgment rate of the overall integrity detection is very low; the method therefore has high accuracy in text data integrity detection and strong generalization in the detection process, and is suitable for wide application.
TABLE 3
In the embodiments of the present application, instance segmentation processing is first performed on the target image to determine the target image area of the target instance; the image edge of the target image is then mirror-filled to obtain a filled image, and the filled image area corresponding to the target image area is determined in the filled image; text detection processing is then performed on the filled image area to obtain at least one filled text box and the filled text box area of each filled text box in the filled image; and finally, if a target filled text box area partially overlapping the target image area exists, the text data in the target image area is determined to be incomplete. By mirror-filling the image edge of the target image to obtain the filled image and then judging whether the filled text box areas obtained from the corresponding filled image area partially overlap the target image area, it can be effectively determined whether text data close to the image edge of the target image is complete. Since no template or other external condition is used in the integrity detection process, the integrity of text data can be flexibly detected, improving the universality and reliability of text data integrity detection. In addition, by acquiring the distance between the target intersection point and the target image boundary of the filled image and judging whether it is greater than the first preset distance threshold, the factor causing the incomplete text data of the target image area can be determined, which facilitates the user's subsequent correction of the target image and improves user experience.
Based on the text data integrity detection scheme and the text data integrity detection system, the embodiments of the present application provide a further text data integrity detection method. Referring to fig. 9, a flow chart of this text data integrity detection method according to an embodiment of the present application is shown. The method shown in fig. 9 may be performed jointly by the server and the terminal device shown in fig. 1, and may include the following steps:
S901, the terminal device transmits the target image to the server.
In the embodiment of the present application, the terminal device may send the target image to the server by wireless communication or wired communication, or may send the target image after encryption, or may send the target image through other methods, which is not limited herein.
S902, the server performs instance segmentation processing on the target image, and determines a target image area of the target instance in the target image.
S903, the server performs text detection processing on the target image area to obtain at least one original text box, and determines the original text box area of each original text box in the target image.
The specific embodiments of step S902 to step S903 may refer to the specific embodiments of step S201 to step S202, which are not described herein.
S904, the server determines an original text box area partially overlapping with the target image area as a target original text box area.
For the specific manner of determining the target original text box area in step S904, reference may be made to the manner of judging whether the target image area and an original text box area partially overlap described in step S203, which is not repeated herein.
S905, the server determines a first original text box sub-region and a second original text box sub-region in the target original text box region.
In the embodiments of the present application, the first original text box sub-region is the sub-region of the target original text box region that does not overlap the target image area, and the second original text box sub-region is the sub-region that overlaps the target image area.
In the embodiments of the present application, the first and second original text box sub-regions in the target original text box region may be determined by acquiring third position information of the target original text box region in the target image and fourth position information of the target image area in the target image, and then determining, based on the third and fourth position information, the first original text box sub-region where the target original text box region does not overlap the target image area and the second original text box sub-region where it does. Specifically, a two-dimensional coordinate system may be established on the target image, the coordinates of the target original text box region and of the target image area determined, and the corresponding mathematical calculation performed on these coordinates to obtain the coordinates of the first and second original text box sub-regions, thereby completing the determination of the two sub-regions.
S906, the server obtains a ratio of the region size of the first original text box sub-region to the region size of the second original text box sub-region.
S907, if the ratio is smaller than the first preset ratio threshold, the server determines that the text data of the target image area is incomplete.
In the embodiments of the present application, the area size may be any data measuring the size of a region, such as area, length, width, perimeter, or pixel count, which is not limited herein. The ratio of the area size of the first original text box sub-region to that of the second may be obtained by acquiring the third position information of the target original text box region in the target image and the fourth position information of the target image area in the target image, determining from them the area sizes of the first and second original text box sub-regions, and then determining the ratio from those two area sizes.
In one embodiment, the ratio may be obtained by acquiring the third position information of the target original text box region in the target image and the fourth position information of the target image area in the target image, determining the first and second original text box sub-regions of the target original text box region based on the third and fourth position information, determining the area size of the first original text box sub-region from the number of pixels it contains and the area size of the second original text box sub-region from the number of pixels it contains, and then determining the ratio from the two area sizes.
In the embodiments of the present application, if the ratio is smaller than the first preset ratio threshold, the target original text box belongs to the target image area, so the text data of the target image area can be determined to be incomplete; if the ratio is greater than the first preset ratio threshold, the target original text box does not belong to the target image area and may be a text box of another image area protruding into the target image area, so the text data of the target image area can be determined to be complete. Optionally, the first preset ratio threshold may be a fraction or a percentage, such as 0.7, 1/2, 80%, or 50% to 80%.
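A sketch of steps S905 to S907 under the assumption that the target image area is a binary instance mask and the target original text box an axis-aligned rectangle; sub-region sizes are measured as pixel counts, as in the embodiment above, and the 0.4 threshold follows the fig. 10 example below.

```python
# Sketch: split the target original text box region into the sub-region outside
# the target image area (first) and inside it (second), then compare the ratio.
import numpy as np

def text_box_belongs_to_area(box, area_mask, ratio_threshold=0.4):
    x0, y0, x1, y1 = box
    box_mask = area_mask[y0:y1, x0:x1]
    second = int(np.count_nonzero(box_mask))   # sub-region overlapping the area
    first = box_mask.size - second             # sub-region outside the area
    if second == 0:
        return False                           # no overlap at all
    return first / second < ratio_threshold    # small ratio -> box belongs to area
```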
For example, referring to fig. 10, fig. 10 shows a schematic diagram of an original text box that does not belong to the target image area. The first preset ratio threshold is set to 0.4. After instance segmentation processing of the target image 1001, the image areas corresponding to the 2 instances are determined to be image area 1002 and image area 1003, and image area 1002 may be determined as the target image area by the area-size criterion. Text detection processing is then performed on image area 1002; because the original text box 1004 in image area 1003 is very close to image area 1002, it is detected as a target original text box partially overlapping image area 1002. Thus, after the target original text box of image area 1002 is determined, it still needs to be judged whether that text box belongs to image area 1002. From the position information of original text box 1004, the area of its first original text box sub-region 1006 is 8000 pixels and the area of its second original text box sub-region 1005 is 1000 pixels, so the ratio is 8. Since 8 is greater than 0.4, it can be determined that original text box 1004 does not belong to image area 1002; as no other original text box partially overlaps image area 1002, the text data of image area 1002 can be determined to be complete.
In one embodiment, it is also possible to: 1) determine, among the at least one original text box area, an original text box area that completely overlaps the target image area; 2) obtain the angle difference between the inclination angle of that original text box area and the inclination angle of the target original text box area; and 3) determine that the text data of the target image area is incomplete if the angle difference is smaller than a preset angle threshold. Specifically, if the target original text box area belongs to the target image area, its inclination angle in the target image should be approximately the same as that of an original text box area completely overlapping the target image area. Judging whether the text data of the target image area is complete in this way effectively avoids misjudgment and helps improve the accuracy of text data integrity detection.
For example, a two-dimensional coordinate system may be established on the target image and the preset angle threshold set to 5 degrees. A first inclination angle of 10 degrees is then calculated from the coordinate information of an original text box area completely overlapping the target image area, and a second inclination angle of 68 degrees from the coordinate information of the target original text box area N; the angle difference is thus 58 degrees, whereby it is determined that the target original text box area N does not belong to the target image area and, further, that the text data of the target image area is complete.
In one embodiment, there may be a plurality of original text box areas completely overlapping the target image area. In that case, the average of their inclination angles may be calculated, the average inclination angle compared with the inclination angle of the target original text box area to obtain the angle difference, and the angle difference finally compared with the preset angle threshold.
In one embodiment, when there are a plurality of original text box areas that completely overlap with the target image area, an angle difference between the inclination angle of the target original text box area and the inclination angle of each original text box area that completely overlaps with the target image area may also be determined, and if each angle difference is smaller than a preset angle threshold, it is determined that the target original text box area belongs to the target image area.
Alternatively, when there are a plurality of original text box areas that completely overlap with the target image area, the inclination angle may be compared in other ways to determine whether the target original text box area belongs to the target image area, which is not limited herein.
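A sketch of the inclination-angle check, assuming each text box is given as a quadrilateral whose first two corner points span one long edge; the averaging variant described above and the 5-degree threshold of the example are used, and the box representation is an assumption for illustration.

```python
# Sketch: compare the tilt of the target original text box with the average
# tilt of the text boxes lying fully inside the target image area.
import numpy as np

def tilt_angle(quad):
    (x0, y0), (x1, y1) = quad[0], quad[1]      # one long edge of the text box
    return np.degrees(np.arctan2(y1 - y0, x1 - x0))

def belongs_by_tilt(target_quad, fully_overlapping_quads, angle_threshold=5.0):
    mean_angle = np.mean([tilt_angle(q) for q in fully_overlapping_quads])
    return abs(tilt_angle(target_quad) - mean_angle) < angle_threshold
```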
S908, the server generates detection result information.
In the embodiments of the present application, the detection result information may indicate whether the text data of the target image is complete; when the text data is incomplete, it may also indicate which target image area in the target image, i.e., which target instance, has incomplete text data, and which factor caused the incompleteness. Optionally, the detection result information may further include other information for judging whether the text data is complete, which is not limited herein.
For example, after text data integrity detection is performed on the target image area M2 corresponding to one invoice and the target image area M3 corresponding to another invoice in the target image M1, it may be determined that the text data in the target image area M2 is incomplete and that the factor causing the incomplete text data is "not shot", while the text data in the target image area M3 is complete. Accordingly, the generated detection result information may be "text in the target image M1 is incomplete and needs to be uploaded again", or "text in the target image M1 was not shot completely and needs to be uploaded again".
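The detection result information can be assembled from the per-area outcomes, for example as below; the message wording follows the example above, while the data layout and function name are assumptions for illustration.

```python
def build_detection_result(image_id, incomplete_areas):
    """incomplete_areas: list of (area_id, factor) pairs; empty when all text is complete."""
    if not incomplete_areas:
        return {"complete": True,
                "message": f"the target image {image_id} was uploaded successfully"}
    factors = "; ".join(f"{area_id}: {factor}" for area_id, factor in incomplete_areas)
    return {"complete": False,
            "message": f"text in the target image {image_id} is incomplete ({factors}) "
                       "and needs to be uploaded again"}
```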
S909, the server transmits the detection result information to the terminal device.
The manner of sending the detection result information to the terminal device in step S909 is a technical means familiar to those skilled in the art, and is not described herein in detail.
In the embodiment of the application, a target image is sent to a server by a terminal device. The server performs instance segmentation processing on the target image to determine a target image area of a target instance in the target image, performs text detection processing on the target image area to obtain at least one original text box, and determines the original text box area of each original text box in the target image. The server then determines an original text box area partially overlapping the target image area as a target original text box area, and determines a first original text box sub-area and a second original text box sub-area in the target original text box area. Next, the server obtains the ratio of the area size of the first original text box sub-area to the area size of the second original text box sub-area; if the ratio is smaller than a first preset ratio threshold, the server determines that the text data of the target image area is incomplete. Finally, the server generates detection result information and sends the detection result information to the terminal device. According to the embodiment of the application, whether a target original text box area partially overlapping the target image area belongs to the target image area can be determined through the ratio of the area size of the first original text box sub-area to the area size of the second original text box sub-area, and a target original text box area that does not belong to the target image area is not taken as evidence that the text data of the target image area is incomplete, so that misjudgment is effectively avoided and the accuracy of the integrity detection of the text data is improved. In addition, because neither templates nor other external conditions are used in the text data integrity detection process, the method has strong universality; the integrity of text data can therefore be detected flexibly, and the universality and reliability of the integrity detection of text data are improved.
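Putting the steps of this summary together, the server-side flow can be sketched as follows. Every helper here (segment_instances, detect_text_boxes, partially_overlaps) stands in for a component described in this application and is assumed rather than defined; box_belongs_to_area is the ratio check sketched earlier.

```python
def detect_text_integrity(target_image):
    """End-to-end sketch: instance segmentation, text detection, then the ratio check."""
    incomplete_areas = []
    for area in segment_instances(target_image):      # target image areas of instances
        boxes = detect_text_boxes(target_image)       # original text box areas
        for box in boxes:
            if partially_overlaps(box, area) and box_belongs_to_area(box.mask, area.mask):
                # A partially overlapping box that belongs to the area
                # means the text data of that area is incomplete.
                incomplete_areas.append((area.id, "partially overlapping text box"))
                break
    return incomplete_areas  # an empty list means the text data is complete
```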
Based on the embodiment of the text data integrity detection method, the embodiment of the application provides a text data integrity detection device. Referring to fig. 11, which is a schematic structural diagram of a text data integrity detection device according to an embodiment of the present application, the device may include a processing unit 1101. The text data integrity detection apparatus shown in fig. 11 may operate as follows:
the processing unit 1101 is configured to perform an instance segmentation process on a target image, and determine a target image area of a target instance in the target image;
the processing unit 1101 is further configured to perform text detection processing on the target image area to obtain at least one original text box, and determine an original text box area of each original text box in the target image;
the processing unit 1101 is further configured to determine that text data of the target image area is incomplete if there is a target original text box area partially overlapping the target image area.
In one embodiment, the processing unit 1101 is further configured to perform mirror image filling processing on an image edge of the target image to obtain a filled image, determine a filled image area corresponding to the target image area in the filled image, perform text detection processing on the filled image area to obtain at least one filled text box and a filled text box area of each filled text box in the filled image, and determine that text data in the target image area is incomplete if there is a target filled text box area partially overlapping the target image area.
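The mirror-filling step can be realized with a standard reflective border, for example with OpenCV as below; the padding width is an illustrative parameter, and note that the coordinates of the target image area shift by the padding amount in the filled image.

```python
import cv2

def mirror_fill(target_image, pad=100):
    """Pad every edge of the image with its mirror reflection (the "filled image")."""
    # After padding, a point (x, y) of the target image sits at (x + pad, y + pad).
    return cv2.copyMakeBorder(target_image, pad, pad, pad, pad, cv2.BORDER_REFLECT)
```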
In one embodiment, the processing unit 1101 is further configured to determine a target intersection point among the intersection points where the region boundary of the target filled text box region intersects the region boundary of the target image region, obtain a distance between the target intersection point and a target image boundary of the filled image, the target image boundary matching the text box sub-region in the target filled text box region that does not overlap with the target image region, determine the factor causing the incomplete text data of the target image region as a first factor if the distance is smaller than a first preset distance threshold, and determine the factor causing the incomplete text data of the target image region as a second factor if the distance is greater than or equal to the first preset distance threshold.
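The distance-based factor decision reduces to a single comparison, sketched below; the threshold value and the factor labels are placeholders, since the concrete semantics of the first and second factors are defined elsewhere in this application.

```python
FIRST_PRESET_DISTANCE_THRESHOLD = 10  # pixels, assumed for illustration

def classify_incompleteness_factor(distance):
    """Map the intersection-to-boundary distance onto the first or second factor."""
    if distance < FIRST_PRESET_DISTANCE_THRESHOLD:
        return "first factor"   # text box truncated close to the image boundary
    return "second factor"      # text box truncated away from the image boundary
```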
In one embodiment, the processing unit 1101 is further configured to perform an instance segmentation process on the target image, determine an image area of each instance in the target image in at least one instance, obtain, for any one of the determined at least one image area, the distance between each area boundary of that image area and the image boundary corresponding to that area boundary in the target image, and, if the distance of at least one area boundary is smaller than a second preset distance threshold, take that image area as the target image area.
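This edge-distance rule for selecting the target image area can be sketched as follows, assuming each instance's image area is summarized by an axis-aligned bounding box; the names and the default threshold are illustrative.

```python
def touches_image_edge(bbox, image_shape, second_preset_distance_threshold=5):
    """bbox: (x0, y0, x1, y1) of an image area; image_shape: (height, width)."""
    height, width = image_shape[:2]
    # Distances from the left, top, right and bottom area boundaries
    # to the corresponding image boundaries.
    distances = (bbox[0], bbox[1], width - bbox[2], height - bbox[3])
    return any(d < second_preset_distance_threshold for d in distances)
```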
In one embodiment, the processing unit 1101 is further configured to determine an original text box area that completely overlaps with the target image area in the at least one original text box area, obtain an angle difference between the inclination angle of the determined original text box area and the inclination angle of the target original text box area, and determine that the text data of the target image area is incomplete if the angle difference is smaller than a preset angle threshold.
In one embodiment, the processing unit 1101 is further configured to determine a first original text box sub-area and a second original text box sub-area in the target original text box area, where the first original text box sub-area refers to a text box sub-area where the target original text box area is not overlapped with the target image area, and the second original text box sub-area refers to a text box sub-area where the target original text box area is overlapped with the target image area, obtain a ratio of an area size of the first original text box sub-area to an area size of the second original text box sub-area, and determine that text data of the target image area is incomplete if the ratio is smaller than a first preset ratio threshold.
In one embodiment, the processing unit 1101 is further configured to perform an instance segmentation process on the target image, determine an image area of each instance in the target image in at least one instance, obtain location information of each image area in the target image, and if it is determined that each image area is located in a central area of the target image based on the location information of each image area, use each image area as the target image area.
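What counts as the central area of the target image is not fixed here; the sketch below assumes a simple margin-based definition purely for illustration, with the margin ratio as an invented parameter.

```python
def in_central_area(bbox, image_shape, margin_ratio=0.25):
    """True if the image area's center falls inside the central region of the image."""
    height, width = image_shape[:2]
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    return (margin_ratio * width < cx < (1 - margin_ratio) * width and
            margin_ratio * height < cy < (1 - margin_ratio) * height)
```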
In one embodiment, the processing unit 1101 is further configured to perform an instance segmentation process on the target image, determine an image area of each instance in the target image in at least one instance, determine a first image area in the determined at least one image area, the area size of the first image area being greater than the area size of each second image area, where a second image area is an image area of the at least one image area other than the first image area, obtain the ratio of the area size of each second image area to the area size of the first image area, and take, as target image areas, the first image area and each second image area whose ratio is greater than a second preset ratio threshold.
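The area-ratio rule for choosing target image areas can be sketched as follows; the default threshold of 0.5 and the data layout are assumptions for illustration.

```python
def select_target_areas(areas, second_preset_ratio_threshold=0.5):
    """areas: list of (area_id, pixel_count) for every segmented instance."""
    first_id, first_size = max(areas, key=lambda a: a[1])   # the largest image area
    targets = [first_id]
    for area_id, size in areas:
        # Keep every second image area whose size ratio to the first one is large enough.
        if area_id != first_id and size / first_size > second_preset_ratio_threshold:
            targets.append(area_id)
    return targets
```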
According to one embodiment of the present application, the steps involved in the text data integrity detection method shown in fig. 2, 6 and 9 may be performed by the respective units in the text data integrity detection apparatus shown in fig. 11.
According to another embodiment of the present application, each unit in the text data integrity detection apparatus shown in fig. 11 may be combined, separately or entirely, into one or several other units, or some unit(s) thereof may be further split into a plurality of units with smaller functions, which can achieve the same operation without affecting the technical effects of the embodiment of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the text data integrity detection device may also include other units divided based on logical functions; in practical applications, these functions may also be implemented with the assistance of other units and by the cooperation of a plurality of units.
According to another embodiment of the present application, the text data integrity detection apparatus shown in fig. 11 may be constructed by running a computer program (including program code) capable of executing the steps involved in the respective methods shown in fig. 2, 6 and 9 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the text data integrity detection method of the embodiment of the present application. The computer program may be recorded on, for example, a computer-readable storage medium, and loaded into and executed by the above computing device.
In the embodiment of the application, instance segmentation processing is performed on a target image to determine a target image area of a target instance in the target image, text detection processing is performed on the target image area to obtain at least one original text box, the original text box area of each original text box in the target image is determined, and if a target original text box area partially overlapping the target image area exists, it is determined that the text data of the target image area is incomplete. By determining the target image area and performing text detection processing on it to obtain the original text box areas, the method can judge whether the text data in the target image area is complete; meanwhile, because neither templates nor other external conditions are used in the integrity detection process, the integrity of the text data can be detected flexibly, which improves the universality and reliability of the integrity detection of text data.
Based on the method embodiments and the device embodiments described above, the application further provides an electronic device. Referring to fig. 12, a schematic structural diagram of an electronic device according to an embodiment of the present application is provided. The electronic device shown in fig. 12 may include at least a processor 1201, an input interface 1202, an output interface 1203, and a computer storage medium 1204, where the processor 1201, the input interface 1202, the output interface 1203, and the computer storage medium 1204 may be connected by a bus or in other manners.
The computer storage medium 1204 may be stored in a memory of the electronic device and is used for storing a computer program comprising program instructions, and the processor 1201 is used for executing the program instructions stored in the computer storage medium 1204. The processor 1201 (or central processing unit, CPU) is the computing core and control core of the electronic device and is adapted to implement one or more instructions, in particular to load and execute one or more instructions so as to implement the above-described text data integrity detection method flow or corresponding functions.
The embodiment of the application also provides a computer storage medium (memory), which is a memory device in the electronic device and is used for storing programs and data. It will be appreciated that the computer storage medium here may include both a built-in storage medium of the terminal and an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores the operating system of the terminal. One or more instructions, which may be one or more computer programs (including program code), are also stored in this storage space and are adapted to be loaded and executed by the processor 1201. It should be noted that the computer storage medium may be a high-speed random access memory (RAM), or a non-volatile memory such as at least one magnetic disk memory, or alternatively at least one computer storage medium located remotely from the foregoing processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by the processor 1201 to implement the corresponding steps of the method described above in connection with the text data integrity detection method embodiments of fig. 2, 6, and 9, in particular, the one or more instructions in the computer storage medium are loaded and executed by the processor 1201 to:
the processor 1201 performs an instance segmentation process on the target image to determine a target image area of the target instance in the target image;
the processor 1201 performs text detection processing on the target image area to obtain at least one original text box, and determines an original text box area of each original text box in the target image;
the processor 1201 determines that the text data of the target image area is incomplete if there is a target original text box area partially overlapping the target image area.
In one embodiment, the processor 1201 performs a mirror filling process on the image edge of the target image to obtain a filled image;
The processor 1201 determines a fill image region corresponding to the target image region in the fill image;
The processor 1201 performs text detection processing on the filled image area to obtain at least one filled text box, and filled text box areas of each filled text box in the filled image;
The processor 1201 determines that text data in the target image area is incomplete if there is a target filled text box area that partially overlaps the target image area.
In one embodiment, the processor 1201 determines a target intersection point in an intersection point where a region boundary of the target filled text box region intersects a region boundary of the target image region;
the processor 1201 acquires a distance between a target intersection point and a target image boundary of the filling image, the target image boundary being matched with a text box sub-region in the target filling text box region that is not overlapped with the target image region;
If the distance is smaller than the first preset distance threshold, the processor 1201 determines the factor causing the incomplete text data of the target image area as the first factor;
If the distance is greater than or equal to the first preset distance threshold, the processor 1201 determines the factor causing the incomplete text data of the target image area as the second factor.
In one embodiment, the processor 1201 performs an instance segmentation process on the target image to determine a target image area of the target instance in the target image, including:
the processor 1201 performs an instance segmentation process on the target image, and determines an image area of each instance in the target image in at least one instance;
the processor 1201 acquires, for any one of the determined at least one image region, the distance between each region boundary of that image region and the image boundary corresponding to that region boundary in the target image;
If the distance of at least one region boundary is less than the second preset distance threshold, the processor 1201 takes that image region as the target image region.
In one embodiment, the processor 1201 determines an original text box region that completely overlaps with the target image region in at least one original text box region;
The processor 1201 acquires an angle difference between the inclination angle of the determined original text box area and the inclination angle of the target original text box area;
If the angle difference is smaller than the preset angle threshold, the processor 1201 determines that the text data of the target image area is incomplete.
In one embodiment, the processor 1201 determines a first original text box sub-region and a second original text box sub-region in the target original text box region, the first original text box sub-region referring to the text box sub-region in which the target original text box region does not overlap with the target image region, and the second original text box sub-region referring to the text box sub-region in which the target original text box region overlaps with the target image region;
The processor 1201 obtains a ratio of the region size of the first original text box sub-region to the region size of the second original text box sub-region;
The processor 1201 determines that the text data of the target image area is incomplete if the ratio is less than the first preset ratio threshold.
In one embodiment, the processor 1201 performs an instance segmentation process on the target image to determine a target image area of the target instance in the target image, including:
the processor 1201 performs an instance segmentation process on the target image, and determines an image area of each instance in the target image in at least one instance;
The processor 1201 acquires position information of each image area in the target image;
If the processor 1201 determines that each image area is located in the center area of the target image based on the position information of each image area, each image area is regarded as the target image area.
In one embodiment, the processor 1201 performs an instance segmentation process on the target image to determine a target image area of the target instance in the target image, including:
the processor 1201 performs an instance segmentation process on the target image, and determines an image area of each instance in the target image in at least one instance;
The processor 1201 determines a first image area in the determined at least one image area, the first image area having an area size larger than an area size of each of second image areas, the second image area being an image area other than the first image area among the at least one image area;
The processor 1201 obtains the ratio of the area size of each second image area to the area size of the first image area;
The processor 1201 takes, as target image areas, the first image area and each second image area whose ratio is greater than the second preset ratio threshold.
Embodiments of the present application further provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device performs the method embodiments described above and illustrated in fig. 2, 6 and 9. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing is merely illustrative of the present application and is not intended to limit it. Any variations or substitutions that can be readily conceived by a person skilled in the art shall fall within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.