CN117911444B - A method and system for cutting out images based on edge processing - Google Patents

A method and system for cutting out images based on edge processing

Info

Publication number
CN117911444B
Authority
CN
China
Prior art keywords
image
edge
cut out
pixel
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311712574.7A
Other languages
Chinese (zh)
Other versions
CN117911444A (en)
Inventor
韦金宋
陶建伟
刘强
陈静文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yuelai Technology Co ltd
Original Assignee
Guangzhou Yuelai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yuelai Technology Co ltd
Priority to CN202311712574.7A
Publication of CN117911444A
Application granted
Publication of CN117911444B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention belongs to the field of image processing and discloses a matting method and system based on edge processing. The method comprises: S1, obtaining the coordinates of the position clicked by the user on the image to be matted; S2, obtaining image edges and numbering each image edge; S3, displaying the image edges and their numbers; S4, storing the coordinates of the pixel points in the image edge corresponding to the number input by the user into a contour coordinate set; S5, judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted, and if not entering S6, otherwise entering S7; S6, highlighting the pixel points in the contour coordinate set and returning to S1; S7, generating a first mask layer; S8, smoothing the edge of the first mask layer to obtain a second mask layer; and S9, fusing the second mask layer with the image to be matted to obtain a matting result layer. The invention effectively improves matting efficiency.

Description

Edge processing-based matting method and system
Technical Field
The invention relates to the field of image processing, in particular to a matting method and system based on edge processing.
Background
In traditional contour matting, when the edge of the matting area is selected, no blurring or erosion is applied to the edge area of the matted image, so the edge of the mask area is not smooth enough and the display effect of the image is poor.
In addition, in traditional contour matting, the user is required to click the edge pixel points one by one, and the area to be matted can only be obtained after all the coordinates clicked by the user have been stored, so matting efficiency is low.
Disclosure of Invention
The invention aims to disclose a matting method and system based on edge processing, which address how to improve the smoothness of the edge of the image obtained after matting and how to improve matting efficiency.
In order to achieve the above purpose, the invention adopts the following technical scheme:
In one aspect, the invention provides a matting method based on edge processing, which comprises the following steps:
S1, acquiring the coordinates of the position clicked by the user on the image to be matted, and storing the coordinates into a contour coordinate set;
S2, acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
S3, highlighting the image edges on the image to be matted, and displaying the number of each image edge;
S4, acquiring the number input by the user, and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
S5, judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted; if not, entering S6, and if so, entering S7;
S6, canceling the highlighting and the display of the numbers on the image to be matted, highlighting the pixel points in the contour coordinate set, and returning to S1;
S7, generating a first mask layer based on the pixel points in the closed area;
S8, smoothing the edge of the first mask layer to obtain a second mask layer;
S9, fusing the second mask layer with the image to be matted to obtain a contour matting result layer.
Optionally, before S1, the method further includes:
acquiring the image to be matted, and establishing a rectangular coordinate system on the image to be matted.
Optionally, establishing a rectangular coordinate system on the image to be matted includes:
establishing a rectangular coordinate system with the lower left corner of the image to be matted as the origin of coordinates, the left side edge of the image to be matted as the X axis, and the lower side edge of the image to be matted as the Y axis.
Optionally, acquiring the image edges based on the coordinates clicked by the user on the image to be matted includes:
letting b denote the pixel point corresponding to the coordinates clicked by the user on the image to be matted;
the image edges are acquired as follows:
S21, acquiring the pixel points in the 8-neighborhood of b whose absolute gray value difference with b is smaller than a set gray value threshold, and storing the obtained pixel points into a set to be calculated;
S22, respectively acquiring the image edge corresponding to each pixel point in the set to be calculated;
S23, taking all the obtained image edges as the image edges.
Optionally, acquiring the image edge corresponding to each pixel point in the set to be calculated includes:
for a pixel point d in the set to be calculated, the process of acquiring its corresponding image edge includes:
S220, taking the pixel point d as the comparison pixel point;
S221, acquiring the set Ucmp of pixel points in the 8-neighborhood of the comparison pixel point whose absolute gray value difference with the comparison pixel point is smaller than the set gray value threshold;
S221, judging whether Ucmp is an empty set; if so, storing the comparison pixel point into the image edge set and ending the calculation; if not, entering S222;
S222, taking the pixel point in Ucmp which does not belong to the image edge set and has the smallest absolute gray value difference with the comparison pixel point as the new comparison pixel point, and returning to S220.
Optionally, numbering each image edge includes:
starting from 1, numbering all image edges consecutively with integer numbers.
Optionally, highlighting the image edges and displaying the number of each image edge includes:
randomly selecting a display color for each image edge, with different image edges corresponding to different display colors;
on the image to be matted, modifying the colors of all pixel points belonging to each image edge to the corresponding display color;
displaying the number of the image edge below the last pixel point of the image edge.
Optionally, highlighting the pixel points in the contour coordinate set includes:
on the image to be matted, modifying the color of the pixel points in the contour coordinate set to a preset color.
Optionally, generating the first mask layer based on the pixel points in the closed area includes:
S81, generating a blank image with the same resolution as the image to be matted, in which the gray values of all pixel points are set to 0;
S82, acquiring the set Upix of the coordinates, on the image to be matted, of the pixel points in the closed area;
S83, in the blank image, setting the gray value of the pixel points corresponding to the coordinates in Upix to 1, obtaining the first mask layer.
In another aspect, the invention provides a matting system based on edge processing, which comprises a coordinate acquisition module, an image edge acquisition module, a first display control module, a number acquisition module, a judgment module, a second display control module, a first generation module, a second generation module and a matting module;
the coordinate acquisition module is used for acquiring the coordinates of the position clicked by the user on the image to be matted and storing the coordinates into the contour coordinate set;
the image edge acquisition module is used for acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
the first display control module is used for highlighting the image edges on the image to be matted and displaying the number of each image edge;
the number acquisition module is used for acquiring the number input by the user and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
the judgment module is used for judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second display control module is used for canceling the highlighting and the display of the numbers on the image to be matted and highlighting the pixel points in the contour coordinate set when the pixel points corresponding to the elements in the contour coordinate set cannot enclose a closed area on the image to be matted;
the first generation module is used for generating a first mask layer based on the pixel points in the closed area when the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second generation module is used for smoothing the edge of the first mask layer to obtain a second mask layer;
the matting module is used for fusing the second mask layer with the image to be matted to obtain a contour matting result layer.
The beneficial effects are that:
The selection of the contour matting edge area is more intelligent and reasonable, and the edge area of the image is smoothed, so that the edge area of the separated image is smoother and the display effect of the image is better.
In addition, during edge selection, the invention uses the described algorithm to intelligently present the user with the possible image edges; the user then only needs to select one of the presented edges to obtain the image edge automatically, without clicking the pixel points belonging to the image edge one by one, thereby effectively improving matting efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a matting method based on edge processing according to the present invention.
Fig. 2 is a schematic diagram of a matting system based on edge processing according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Contour matting is one of the most common operations in image processing: a closed region is drawn on an image and separated into an independent layer in preparation for later image composition. At present, traditional contour matting is implemented with Xfermode; the usual scheme is to draw a rectangular area and then output the intersection of that area with the original image, so that the rectangular area of the image is separated into an independent layer.
Embodiment one:
In one embodiment shown in fig. 1, the present invention provides a matting method based on edge processing, including:
S1, acquiring the coordinates of the position clicked by the user on the image to be matted, and storing the coordinates into a contour coordinate set;
S2, acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
S3, highlighting the image edges on the image to be matted, and displaying the number of each image edge;
S4, acquiring the number input by the user, and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
S5, judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted; if not, entering S6, and if so, entering S7;
S6, canceling the highlighting and the display of the numbers on the image to be matted, highlighting the pixel points in the contour coordinate set, and returning to S1;
S7, generating a first mask layer based on the pixel points in the closed area;
S8, smoothing the edge of the first mask layer to obtain a second mask layer;
S9, fusing the second mask layer with the image to be matted to obtain a contour matting result layer.
Compared with the prior art, the edge area of the image obtained by matting is selected more intelligently and reasonably, and the edge area of the image is smoothed, so that the edge area of the separated image is smoother and the display effect of the image is better.
In addition, during edge selection, the invention uses the described algorithm to intelligently present the user with the possible image edges; the user then only needs to select one of the presented edges to obtain the image edge automatically, without clicking the pixel points belonging to the image edge one by one, thereby effectively improving matting efficiency.
Furthermore, since the selectable image edges are generated for the user by the algorithm, the accuracy of the user's edge contour selection is improved.
Optionally, before S1, the method further includes:
acquiring the image to be matted, and establishing a rectangular coordinate system on the image to be matted.
Specifically, in the present invention, all coordinates are coordinates in this rectangular coordinate system.
Optionally, establishing a rectangular coordinate system on the image to be matted includes:
establishing a rectangular coordinate system with the lower left corner of the image to be matted as the origin of coordinates, the left side edge of the image to be matted as the X axis, and the lower side edge of the image to be matted as the Y axis.
Further, the left side here refers to the leftmost edge of the image, and the lower side refers to the bottom edge.
Optionally, acquiring the image edges based on the coordinates clicked by the user on the image to be matted includes:
letting b denote the pixel point corresponding to the coordinates clicked by the user on the image to be matted;
the image edges are acquired as follows:
S21, acquiring the pixel points in the 8-neighborhood of b whose absolute gray value difference with b is smaller than a set gray value threshold, and storing the obtained pixel points into a set to be calculated;
S22, respectively acquiring the image edge corresponding to each pixel point in the set to be calculated;
S23, taking all the obtained image edges as the image edges.
Specifically, the set gray value threshold is 3.
By setting the gray value threshold, all pixel points that are highly similar to the pixel point b are stored in the set to be calculated, from which all qualifying image edges can then be obtained.
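For illustration only, the following is a minimal Python/NumPy sketch of step S21, assuming the image is available as a 2-D grayscale array and the clicked position has been converted to a (row, column) index; the function name seed_neighbours and the constant GRAY_THRESHOLD are not part of the patent and are chosen here for clarity.

```python
import numpy as np

GRAY_THRESHOLD = 3  # set gray value threshold stated in the description

def seed_neighbours(gray: np.ndarray, b: tuple) -> list:
    """Step S21: collect the 8-neighbours of the clicked pixel b whose gray
    value differs from b by less than GRAY_THRESHOLD (the 'set to be calculated')."""
    h, w = gray.shape
    r0, c0 = b
    seeds = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip b itself
            r, c = r0 + dr, c0 + dc
            if 0 <= r < h and 0 <= c < w and \
                    abs(int(gray[r, c]) - int(gray[r0, c0])) < GRAY_THRESHOLD:
                seeds.append((r, c))
    return seeds
```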
Optionally, acquiring the image edge corresponding to each pixel point in the set to be calculated includes:
for a pixel point d in the set to be calculated, the process of acquiring its corresponding image edge includes:
S220, taking the pixel point d as the comparison pixel point;
S221, acquiring the set Ucmp of pixel points in the 8-neighborhood of the comparison pixel point whose absolute gray value difference with the comparison pixel point is smaller than the set gray value threshold;
S221, judging whether Ucmp is an empty set; if so, storing the comparison pixel point into the image edge set and ending the calculation; if not, entering S222;
S222, taking the pixel point in Ucmp which does not belong to the image edge set and has the smallest absolute gray value difference with the comparison pixel point as the new comparison pixel point, and returning to S220.
In the above acquisition process, the invention continually moves the comparison pixel point, so that the image edge extends along the likely edge direction and the image edge is thereby obtained.
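A minimal sketch of this tracing loop is given below, continuing the previous sketch (GRAY_THRESHOLD and the (row, column) convention are reused). The source leaves open exactly when visited pixels are added to the image edge set; this sketch assumes every comparison pixel is recorded as it is visited, which keeps the "does not belong to the image edge set" condition meaningful. The function name trace_edge is an assumption.

```python
GRAY_THRESHOLD = 3  # same threshold as in the previous sketch

def trace_edge(gray, d):
    """Steps S220-S222: starting from seed pixel d, repeatedly move to the
    untried 8-neighbour whose gray value is closest to the current comparison
    pixel; stop when no such neighbour within the threshold remains."""
    h, w = gray.shape
    edge = []            # image edge set (visited comparison pixels)
    cmp_px = d
    while True:
        edge.append(cmp_px)
        r0, c0 = cmp_px
        candidates = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                r, c = r0 + dr, c0 + dc
                if not (0 <= r < h and 0 <= c < w):
                    continue
                diff = abs(int(gray[r, c]) - int(gray[r0, c0]))
                if diff < GRAY_THRESHOLD and (r, c) not in edge:
                    candidates.append((diff, (r, c)))
        if not candidates:           # Ucmp (minus visited pixels) is empty
            return edge
        cmp_px = min(candidates)[1]  # neighbour with the smallest gray difference
```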
Optionally, numbering each image edge includes:
starting from 1, numbering all image edges consecutively with integer numbers.
Optionally, highlighting the image edges and displaying the number of each image edge includes:
randomly selecting a display color for each image edge, with different image edges corresponding to different display colors;
on the image to be matted, modifying the colors of all pixel points belonging to each image edge to the corresponding display color;
displaying the number of the image edge below the last pixel point of the image edge.
Specifically, for example, if there are 5 image edges in total, the corresponding numbers are the integers 1 to 5: an unnumbered image edge is selected one at a time and numbered, until all image edges are numbered.
In particular, the display colors should differ clearly from one another, so that the user can distinguish the different image edges at a glance.
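For illustration, a minimal sketch of the display step follows, assuming a BGR image array and edges given as lists of (row, column) pixels; OpenCV is used only as an example drawing library and the name show_edges is an assumption. A practical implementation would also ensure that the randomly chosen colors differ clearly from one another, as noted above.

```python
import random
import cv2
import numpy as np

def show_edges(image_bgr: np.ndarray, edges: list) -> np.ndarray:
    """Draw every candidate image edge in its own random display color and
    print the edge number below its last pixel."""
    canvas = image_bgr.copy()
    for number, edge in enumerate(edges, start=1):      # numbering starts at 1
        color = tuple(int(random.randint(0, 255)) for _ in range(3))
        for r, c in edge:
            canvas[r, c] = color                        # highlight edge pixels
        last_r, last_c = edge[-1]
        cv2.putText(canvas, str(number), (last_c, last_r + 15),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, color, 1)
    return canvas
```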
Optionally, highlighting the pixel points in the contour coordinate set includes:
on the image to be matted, modifying the color of the pixel points in the contour coordinate set to a preset color.
Specifically, the preset color may be a color different from all the display colors.
Optionally, generating the first mask layer based on the pixel points in the closed area includes:
S81, generating a blank image with the same resolution as the image to be matted, in which the gray values of all pixel points are set to 0;
S82, acquiring the set Upix of the coordinates, on the image to be matted, of the pixel points in the closed area;
S83, in the blank image, setting the gray value of the pixel points corresponding to the coordinates in Upix to 1, obtaining the first mask layer.
In the blank image, a rectangular coordinate system is established in the same way as for the image to be matted, so that coordinates in the blank image and in the image to be matted can be used directly without coordinate conversion, making mask layer generation more efficient.
Specifically, after the blank image is generated, the pixel values of the pixel points in the area to be matted are set to 1, so that only the target area of the image to be matted is retained in the subsequent image operation.
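A minimal sketch of steps S81-S83 follows; for simplicity it indexes the mask by (row, column) rather than by the rectangular coordinate system described above, and the function name first_mask is an assumption.

```python
import numpy as np

def first_mask(image_shape: tuple, region_coords: list) -> np.ndarray:
    """Steps S81-S83: create a zero image with the same resolution as the image
    to be matted and set the pixels of the enclosed region (Upix) to 1."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)    # S81: blank image, all 0
    if region_coords:
        rows, cols = zip(*region_coords)                # S82: coordinates Upix
        mask[list(rows), list(cols)] = 1                # S83: region set to 1
    return mask
```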
Optionally, smoothing the edge of the first mask layer to obtain the second mask layer includes:
acquiring the set Uedg of pixel points whose distance to the edge of the first mask layer is smaller than or equal to a preset distance;
smoothing each pixel point in Uedg to obtain the second mask layer.
Specifically, the invention uses the preset distance to screen the pixel points near the edge of the first mask layer, so that these pixel points can be smoothed in the subsequent calculation.
Optionally, the determination of the pixel points belonging to the edge of the first mask layer includes:
for a pixel point u in the first mask layer, if the 8-neighborhood of u contains a pixel point with a gray value of 0, then u is a pixel point at the edge of the first mask layer.
Further, the preset distance is 3.
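A minimal sketch of this edge-pixel determination follows; the name mask_edge_pixels and the 8-neighborhood reading of the condition above are assumptions made for illustration.

```python
import numpy as np

def mask_edge_pixels(mask: np.ndarray) -> np.ndarray:
    """Mark the foreground pixels of the first mask layer whose 8-neighborhood
    contains at least one pixel with gray value 0 (the mask edge)."""
    h, w = mask.shape
    edge = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            if mask[r, c] == 0:
                continue
            r0, r1 = max(r - 1, 0), min(r + 2, h)
            c0, c1 = max(c - 1, 0), min(c + 2, w)
            if (mask[r0:r1, c0:c1] == 0).any():   # at least one background neighbour
                edge[r, c] = True
    return edge
```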
Optionally, smoothing each pixel point in Uedg to obtain the second mask layer includes:
for a pixel point v in Uedg, calculating its gray value after smoothing with the following function:
where gray_v,af and gray_v,bf respectively denote the gray value of the pixel point v after and before the smoothing, dist_v denotes the distance between the pixel point v and the edge of the first mask layer, and dist_preset denotes the preset distance.
Specifically, in the invention, the closer a pixel point is to the edge, the smaller its gray value after smoothing, so that when the second mask layer is combined with the image to be matted, the edge of the resulting matted area is smoother and a better display effect is obtained.
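The smoothing formula itself is not reproduced in this text. As an illustration only, the sketch below continues the previous one (mask_edge_pixels) and assumes the simplest form consistent with the stated behavior, gray_v,af = gray_v,bf × dist_v / dist_preset for the pixels of Uedg; the use of SciPy's distance transform and the name second_mask are likewise assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

PRESET_DISTANCE = 3  # preset distance stated in the description

def second_mask(mask: np.ndarray) -> np.ndarray:
    """Attenuate the first mask layer near its edge to obtain a soft second
    mask layer, assuming gray_after = gray_before * dist / PRESET_DISTANCE."""
    edge = mask_edge_pixels(mask)                   # edge pixels (previous sketch)
    # Euclidean distance of every pixel to the nearest mask-edge pixel
    dist = distance_transform_edt(~edge)
    smoothed = mask.astype(np.float32)
    uedg = (mask > 0) & (dist <= PRESET_DISTANCE)   # the set Uedg
    smoothed[uedg] *= dist[uedg] / PRESET_DISTANCE
    return smoothed
```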
Optionally, in the first mask layer, blurring and erosion may instead be performed on the pixel points in Uedg to obtain the second mask layer.
Optionally, fusing the second mask layer with the image to be matted to obtain the contour matting result layer includes:
multiplying the second mask layer with the image to be matted to obtain the contour matting result layer.
Specifically, in the invention, if the image to be matted is stored in RGB format, the gray value of each pixel point in the second mask layer is multiplied by the red, green and blue components of the corresponding pixel point in the image to be matted, and the component values obtained after multiplication are used as the pixel values of the result in the RGB color space.
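A minimal sketch of this fusion step, assuming an RGB image array and the soft mask produced by the previous sketch (the function name fuse is an assumption):

```python
import numpy as np

def fuse(image_rgb: np.ndarray, soft_mask: np.ndarray) -> np.ndarray:
    """Multiply every color component of the image to be matted by the second
    mask layer to obtain the contour matting result layer."""
    result = image_rgb.astype(np.float32) * soft_mask[..., None]  # per-channel scaling
    return result.astype(np.uint8)
```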
Embodiment two:
The invention provides a matting system based on edge processing, which comprises a coordinate acquisition module, an image edge acquisition module, a first display control module, a number acquisition module, a judgment module, a second display control module, a first generation module, a second generation module and a matting module;
the coordinate acquisition module is used for acquiring the coordinates of the position clicked by the user on the image to be matted and storing the coordinates into the contour coordinate set;
the image edge acquisition module is used for acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
the first display control module is used for highlighting the image edges on the image to be matted and displaying the number of each image edge;
the number acquisition module is used for acquiring the number input by the user and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
the judgment module is used for judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second display control module is used for canceling the highlighting and the display of the numbers on the image to be matted and highlighting the pixel points in the contour coordinate set when the pixel points corresponding to the elements in the contour coordinate set cannot enclose a closed area on the image to be matted;
the first generation module is used for generating a first mask layer based on the pixel points in the closed area when the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second generation module is used for smoothing the edge of the first mask layer to obtain a second mask layer;
the matting module is used for fusing the second mask layer with the image to be matted to obtain a contour matting result layer.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

(Translated from Chinese)
1. A matting method based on edge processing, characterized by comprising:
S1, acquiring the coordinates of the position clicked by the user on the image to be matted, and storing the coordinates into a contour coordinate set;
S2, acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
S3, highlighting the image edges on the image to be matted, and displaying the number of each image edge;
S4, acquiring the number input by the user, and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
S5, judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted; if not, entering S6, and if so, entering S7;
S6, canceling the highlighting and the display of the numbers on the image to be matted, highlighting the pixel points in the contour coordinate set, and returning to S1;
S7, generating a first mask layer based on the pixel points in the closed area;
S8, smoothing the edge of the first mask layer to obtain a second mask layer;
S9, fusing the second mask layer with the image to be matted to obtain a contour matting result layer;
wherein acquiring the image edges based on the coordinates clicked by the user on the image to be matted comprises:
letting b denote the pixel point corresponding to the coordinates clicked by the user on the image to be matted;
acquiring the image edges as follows:
S21, acquiring the pixel points in the 8-neighborhood of b whose absolute gray value difference with b is smaller than a set gray value threshold, and storing the obtained pixel points into a set to be calculated;
S22, respectively acquiring the image edge corresponding to each pixel point in the set to be calculated;
S23, taking all the obtained image edges as the image edges.
2. The matting method based on edge processing according to claim 1, characterized in that, before S1, the method further comprises:
acquiring the image to be matted, and establishing a rectangular coordinate system on the image to be matted.
3. The matting method based on edge processing according to claim 2, characterized in that establishing a rectangular coordinate system on the image to be matted comprises:
establishing a rectangular coordinate system with the lower left corner of the image to be matted as the origin of coordinates, the left side edge of the image to be matted as the X axis, and the lower side edge of the image to be matted as the Y axis.
4. The matting method based on edge processing according to claim 1, characterized in that respectively acquiring the image edge corresponding to each pixel point in the set to be calculated comprises:
for a pixel point d in the set to be calculated, the process of acquiring its corresponding image edge comprises:
S220, taking the pixel point d as the comparison pixel point;
S221, acquiring the set Ucmp of pixel points in the 8-neighborhood of the comparison pixel point whose absolute gray value difference with the comparison pixel point is smaller than the set gray value threshold;
S221, judging whether Ucmp is an empty set; if so, storing the comparison pixel point into the image edge set and ending the calculation; if not, entering S222;
S222, taking the pixel point in Ucmp which does not belong to the image edge set and has the smallest absolute gray value difference with the comparison pixel point as the new comparison pixel point, and returning to S220.
5. The matting method based on edge processing according to claim 1, characterized in that numbering each image edge comprises:
starting from 1, numbering all image edges consecutively with integer numbers.
6. The matting method based on edge processing according to claim 1, characterized in that highlighting the image edges and displaying the number of each image edge comprises:
randomly selecting a display color for each image edge, with different image edges corresponding to different display colors;
on the image to be matted, modifying the colors of all pixel points belonging to each image edge to the corresponding display color;
displaying the number of the image edge below the last pixel point of the image edge.
7. The matting method based on edge processing according to claim 1, characterized in that highlighting the pixel points in the contour coordinate set comprises:
on the image to be matted, modifying the color of the pixel points in the contour coordinate set to a preset color.
8. The matting method based on edge processing according to claim 1, characterized in that generating the first mask layer based on the pixel points in the closed area comprises:
S81, generating a blank image with the same resolution as the image to be matted, in which the gray values of all pixel points are set to 0;
S82, acquiring the set Upix of the coordinates, on the image to be matted, of the pixel points in the closed area;
S83, in the blank image, setting the gray value of the pixel points corresponding to the coordinates in Upix to 1, obtaining the first mask layer.
9. A matting system based on edge processing, characterized by comprising a coordinate acquisition module, an image edge acquisition module, a first display control module, a number acquisition module, a judgment module, a second display control module, a first generation module, a second generation module and a matting module;
the coordinate acquisition module is used for acquiring the coordinates of the position clicked by the user on the image to be matted and storing the coordinates into the contour coordinate set;
the image edge acquisition module is used for acquiring image edges based on the coordinates clicked by the user on the image to be matted, and numbering each image edge;
the first display control module is used for highlighting the image edges on the image to be matted and displaying the number of each image edge;
the number acquisition module is used for acquiring the number input by the user and storing the coordinates of the pixel points in the image edge corresponding to the number into the contour coordinate set;
the judgment module is used for judging whether the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second display control module is used for canceling the highlighting and the display of the numbers on the image to be matted and highlighting the pixel points in the contour coordinate set when the pixel points corresponding to the elements in the contour coordinate set cannot enclose a closed area on the image to be matted;
the first generation module is used for generating a first mask layer based on the pixel points in the closed area when the pixel points corresponding to the elements in the contour coordinate set can enclose a closed area on the image to be matted;
the second generation module is used for smoothing the edge of the first mask layer to obtain a second mask layer;
the matting module is used for fusing the second mask layer with the image to be matted to obtain a contour matting result layer;
wherein acquiring the image edges based on the coordinates clicked by the user on the image to be matted comprises:
letting b denote the pixel point corresponding to the coordinates clicked by the user on the image to be matted;
acquiring the image edges as follows:
S21, acquiring the pixel points in the 8-neighborhood of b whose absolute gray value difference with b is smaller than a set gray value threshold, and storing the obtained pixel points into a set to be calculated;
S22, respectively acquiring the image edge corresponding to each pixel point in the set to be calculated;
S23, taking all the obtained image edges as the image edges.
CN202311712574.7A (priority date 2023-12-13, filing date 2023-12-13): A method and system for cutting out images based on edge processing. Active, granted as CN117911444B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311712574.7A (CN117911444B) | 2023-12-13 | 2023-12-13 | A method and system for cutting out images based on edge processing

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202311712574.7A (CN117911444B) | 2023-12-13 | 2023-12-13 | A method and system for cutting out images based on edge processing

Publications (2)

Publication Number | Publication Date
CN117911444A (en) | 2024-04-19
CN117911444B (en) | 2024-11-29

Family

ID=90680919

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202311712574.7A (Active, CN117911444B) | A method and system for cutting out images based on edge processing | 2023-12-13 | 2023-12-13

Country Status (1)

Country | Link
CN (1) | CN117911444B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118447042B (en)* | 2024-07-05 | 2025-07-11 | 杭州阿里巴巴海外互联网产业有限公司 | Picture processing method and system, online wearing system and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105006002A (en)* | 2015-08-31 | 2015-10-28 | 北京华拓金融服务外包有限公司 | Automatic picture matting method and apparatus
CN110555893A (en)* | 2018-06-01 | 2019-12-10 | 奥多比公司 | Generating an enhanced digital image by selectively converting a raster image to a vector drawing portion
CN112465734A (en)* | 2020-10-29 | 2021-03-09 | 星业(海南)科技有限公司 | Method and device for separating picture layers

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
DE102009015594B4 (en)* | 2009-03-30 | 2015-07-30 | Carl Zeiss Sms Gmbh | Method and device for subpixel accurate position determination of an edge of a marker structure in a plurality of receiving pixels having recording the marker structure
CN109272526B (en)* | 2017-07-17 | 2021-11-02 | 北京京东尚科信息技术有限公司 | Image processing method and system and electronic equipment
CN110111342B (en)* | 2019-04-30 | 2021-06-29 | 贵州民族大学 | An optimal selection method and device for matting algorithms
CN111127404B (en)* | 2019-12-06 | 2023-04-18 | 广州柏视医疗科技有限公司 | Medical image contour rapid extraction method
CN111161288B (en)* | 2019-12-26 | 2023-04-14 | 郑州阿帕斯数云信息科技有限公司 | Image processing method and device
CN114898335A (en)* | 2022-05-16 | 2022-08-12 | 南通顺沃供应链管理有限公司 | Lane line identification method and system based on Hough transform
CN115035147B (en)* | 2022-06-29 | 2025-04-22 | 卡莱特云科技股份有限公司 | Cutting method, device, system and image fusion method based on virtual shooting


Also Published As

Publication number | Publication date
CN117911444A (en) | 2024-04-19


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
