Image processing method, device, equipment and computer storage medium

Info

Publication number
CN111462007B
Authority
CN
China
Prior art keywords
target
image block
image
area
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010244882.1A
Other languages
Chinese (zh)
Other versions
CN111462007A (en)
Inventor
庞文杰
洪智滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010244882.1A
Publication of CN111462007A
Application granted
Publication of CN111462007B
Legal status: Active (current)
Anticipated expiration

Abstract

The application discloses an image processing method, an image processing device, image processing equipment, and a computer storage medium, relating to the field of computer technology and in particular to image processing. The specific implementation scheme is as follows: determining a target area in a first image; determining, according to the target area, a first target image block in the first image that includes the target area; determining a second target image block in a second image according to the first target image block; and fusing, in the target area, the content in the second target image block with the content in the first target image block.

Description

Image processing method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to the field of image processing.
Background
With the development of mobile terminals, users can take photos with portable electronic devices anytime and anywhere. Mobile photography technology has likewise evolved from the initial pursuit of higher pixel counts toward diversified usage modes such as image editing and image matting.
Today, most mobile applications involve image acquisition and processing to some degree. As users grow more dependent on mobile applications, how to achieve breakthroughs in image acquisition and processing has become an important issue in image processing and, more broadly, in optimizing and perfecting applications.
Disclosure of Invention
In order to solve at least one problem in the prior art, embodiments of the present application provide an image processing method, apparatus, device, and computer storage medium.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a target area in the first image;
determining a first target image block comprising a target area in the first image according to the target area;
determining a second target image block in the second image according to the first target image block;
and fusing the content in the second target image block with the content in the first target image block in the target area.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
target area module: for determining a target region in the first image;
a first target image block module: for determining, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module: for determining a second target image block in a second image according to the first target image block;
and a fusion module: for fusing the content in the second target image block with the content in the first target image block in the target area.
In a third aspect, embodiments of the present application provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing methods provided in any one of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image processing method provided in any one of the embodiments of the present application.
One embodiment of the above application has the following advantages or benefits: it can open a new development direction for terminal image-capture technology. By determining the target area in the first image, determining the first target image block, determining a second target image block in the second image that has a certain correlation with the first target image block, and finally fusing the content of the second target image block into the target area of the first image, elements of the second image can be migrated into the first image. This provides users with rich image processing means and solves the technical problem of a single image-capture mode.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 5 is a schematic illustration of a facial image according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 7 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIGS. 8A and 8B are effect diagrams after processing a first image and a second image of one example of the present application;
FIGS. 9A and 9B are effect diagrams after processing a first image and a second image of another example of the present application;
FIG. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 13 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 14 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 15 is a block diagram of an electronic device for implementing the image processing method of the embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
According to the image processing method of the embodiments of the present application, a first target image block is selected from a first image to be processed, the image block closest to the first target image block is then found in a second image as the second target image block, and the content in the second target image block is migrated to the target area in the first target image block.
The embodiment of the application first provides an image processing method, as shown in fig. 1, including:
step 101: a target region in the first image is determined.
In the embodiment of the present application, the target area may be acquired according to preset information. For example, if the first image is a facial image, the makeup regions in the facial image may include an eye makeup region, a cheek makeup region, a lip makeup region, and an eyebrow makeup region, and the target region may be at least one of these makeup regions. Alternatively, the first image is a facial image including facial organs such as the eyes, cheeks, lips, and eyebrows, and the target area is at least one of the facial organ areas.
In one embodiment of the present application, the first image is a facial image and the target area is one of the makeup areas. Makeup areas of different extents can be selected according to the makeup type. For example, the makeup type may be light makeup, heavy makeup, bright makeup, Beijing opera facial makeup, and so on. The makeup type may be determined from a user-selected parameter, or it may be determined from the first image. For example, if bright, saturated colors occupy a large area of the first image, the makeup type can be determined to be bright makeup. The makeup type may also be determined from the second image. For example, if the second image is mostly light-colored or dark, the makeup type may be determined to be light makeup. Different makeup types correspond to different makeup extents and target areas. For example, smoky makeup corresponds to a larger makeup extent around the eyes and thus a larger target area; light makeup corresponds to a smaller makeup extent on the eyes and a smaller target area.
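As a rough illustration of the color-area heuristic above, the following sketch classifies an image as bright or light makeup from the share of highly saturated pixels. The function name and the thresholds are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def guess_makeup_type(image_bgr: np.ndarray) -> str:
    """Pick a makeup type from the share of vividly colored pixels (illustrative)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32) / 255.0
    vivid_ratio = float(np.mean(saturation > 0.6))  # fraction of saturated pixels
    # Assumed cutoff: a large vivid area suggests bright makeup, otherwise light.
    return "bright" if vivid_ratio > 0.3 else "light"
```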
The target region may be acquired according to the type of the first image. For example, in one embodiment, the first image is a facial image and the target area is a make-up area in the facial image. In another embodiment, the first image may be other images, such as a building image, an animal face image, a sculptured face image, etc., and the target area is a preset area where the content needs to be changed.
Step 102: and determining a first target image block comprising the target area in the first image according to the target area.
In an embodiment of the present application, the first target image block may coincide with the target area.
In another embodiment of the present application, the first target image block is a circumscribed rectangle of the target area. For example, the first image is a facial image, the target area may be one of an eye makeup area, an eyebrow makeup area, a cheek makeup area, and a lip makeup area, and the target area may be an irregular shape. In this case, the first target image block may be the rectangle circumscribing the target area.
For another example, the first image is a facial image, the target area may be one of an eye area, an eyebrow area, a cheek area, and a lip area, and the first target image block may be the rectangle circumscribing the target area.
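A minimal sketch of extracting the circumscribed rectangle, assuming the target area is supplied as a binary mask; cv2.boundingRect returns the smallest upright rectangle enclosing the nonzero mask pixels. The function name is an assumption for illustration.

```python
import cv2
import numpy as np

def first_target_block(image: np.ndarray, target_mask: np.ndarray):
    """Crop the circumscribed rectangle of an (irregular) target area mask."""
    x, y, w, h = cv2.boundingRect(target_mask)  # smallest upright enclosing rectangle
    return image[y:y + h, x:x + w], (x, y, w, h)
```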
Step 103: and determining a second target image block in the second image according to the first target image block.
In the embodiment of the present application, the second target image block may be an image block (patch) in the second image whose relationship to the first target image block satisfies a set criterion. For example, the second target image block may be the image block in the second image whose image characteristics are closest to those of the first target image block.
Step 104: and fusing the content in the second target image block with the content in the first target image block in the target area.
In the embodiment of the application, fusing the content in the second target image block with the content in the first target image block may mean selectively displaying the content of the second target image block in the target area of the first target image block.
In the embodiment of the application, fusing the content in the second target image block with the content in the first target image block may also mean changing part or all of the image features in the first target image block using part or all of the image features in the second target image block.
For the first image, where there may be multiple target areas, the operations of steps 101-104 described above may be performed on each target area one by one.
In the embodiment of the application, the second target image block can be determined in the second image according to the target area in the first image, and the content in the second target image block is then fused, in the target area, with the content in the first target image block, so that the first image can be processed and adjusted according to elements in the second image. In practical applications, the method can be applied to scenarios such as applying makeup to a user's face by reference to a picture, providing diversified processing modes for applications such as terminal image beautification or video beautification.
In one embodiment, as shown in fig. 2, step S104 may include:
step 201: determining a target mask layer corresponding to the target region from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
step 202: and in the target area, the corresponding content of the first target image block is presented in a first presentation proportion, and the corresponding content of the second target image block is presented in a second presentation proportion.
In this embodiment of the present application, the first presentation proportion and the second presentation proportion may apply to all of the pixels in the target area or only to some of them.
In one embodiment, the sum of the first presentation proportion and the second presentation proportion is 1. For example, 30% of the first target image block content is presented and 70% of the second target image block content is presented. In this way, content migration is achieved while characteristics such as the texture of the first target image block are retained.
In this embodiment of the present application, the plurality of preset mask layers may correspond to different target region types. For example, the first image is a facial image, and if the target region is an eye makeup region, the mask layer is a mask layer corresponding to the eye makeup region.
In the embodiment of the present application, in the target area, the content of the first target image block is presented at the set first presentation proportion and the content of the second target image block at the set second presentation proportion, so that the content of the second target image block of the second image is migrated to the target area of the first image. The first image thus takes on the content and style of the second image, opening up a new function for video and image shooting on current terminals.
In one embodiment, as shown in fig. 3, step S104 may include:
step 301: determining a target mask layer corresponding to a target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion of each pixel in the target area;
step 302: for each pixel in the target area, presenting on the pixel the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
In the embodiment of the application, different pixels in the target area may correspond to different rendering proportions. Therefore, when the method is applied to a scene of applying makeup to a facial image, natural transition exists between the makeup area and the non-makeup area, and a better makeup effect is achieved.
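One plausible reading of steps 301 and 302 is a per-pixel alpha blend in which the mask layer stores each pixel's second presentation proportion and the first proportion is its complement, so the two sum to 1. A minimal sketch, assuming the two blocks are already aligned to the same size:

```python
import numpy as np

def fuse_blocks(first_block: np.ndarray, second_block: np.ndarray,
                mask: np.ndarray) -> np.ndarray:
    """Blend two equally sized image blocks with a per-pixel mask in [0, 1]."""
    alpha = mask.astype(np.float32)[..., None]      # second presentation proportion
    fused = (1.0 - alpha) * first_block.astype(np.float32) \
            + alpha * second_block.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

A mask whose values fall off smoothly toward zero at its border yields exactly the natural transition between the makeup area and the non-makeup area described above.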
In one embodiment, as shown in fig. 4, the first image is a facial image, the target area is a target make-up area, and step S101 may include:
step 401: determining a sample make-up area in the sample facial image;
step 402: the sample makeup area is affine transformed to a face area in the first image according to the feature points of the first image to obtain a target area.
In the embodiment of the present application, a sample face is provided. As shown in fig. 5, the image of the sample face contains a plurality of predetermined feature points 501 that form a plurality of triangular regions over the face, and at least one sample makeup area, i.e., the areas outlined with broken lines in the figure, is set on the sample face, such as sample eye makeup areas 502 and 503, sample lip makeup area 504, and sample cheek makeup areas 505 and 506. The sample lip makeup area 504 coincides with the lip area. The same predetermined feature points are also present in the user's facial image, so the sample makeup area of the sample image can be affine transformed into the user's facial image.
An affine transformation, also called an affine mapping, transforms one vector space into another by a linear transformation followed by a translation. Geometrically, an affine transformation between two vector spaces consists of a non-singular linear transformation composed with a translation. In the embodiments of the present application, the sample makeup area is projected into the user's facial image through affine transformation, so that a makeup area matching the facial features of the specific user can be determined in the user's facial image.
By affine transforming the sample makeup areas into the user's facial image, the target area can be determined in the user's facial image, and makeup areas that match the user's personal facial features can be determined from the user's actual facial image. For example, some users have larger eye regions, and the corresponding eye makeup areas are correspondingly larger.
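A minimal sketch of the affine step for a single triangle of corresponding feature points; a complete implementation would repeat this over every triangle of the face mesh in fig. 5. The function name and signature are assumptions for illustration.

```python
import cv2
import numpy as np

def warp_sample_region(sample_mask: np.ndarray, sample_tri: np.ndarray,
                       user_tri: np.ndarray, user_shape: tuple) -> np.ndarray:
    """Affine-map a sample makeup mask onto the user's face for one triangle.

    sample_tri / user_tri: 3x2 arrays of corresponding feature point coordinates.
    """
    M = cv2.getAffineTransform(sample_tri.astype(np.float32),
                               user_tri.astype(np.float32))
    h, w = user_shape[:2]
    return cv2.warpAffine(sample_mask, M, (w, h), flags=cv2.INTER_LINEAR)
```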
In one example, the target region may include a first target region and a second target region. Further, step S102 may include: determining, according to the first target area, a first target image block in the first image that includes the first target area; and step S104 may include: fusing, in a second target area corresponding to the first target area, the content in the second target image block with the content in the first target image block. For example, the first image is a facial image and the first target region is a facial organ region, such as an eye region. The second target region may be a circumscribed region of the facial organ, such as the circumscribed region of the eyes where makeup is desired.
In some cases, the second target area may be larger than the first target area. For example: the target make-up area is larger than the corresponding facial organ area, e.g., the eye make-up area is larger than the eye area. In other cases, the second target area may be smaller than the first target area. For example: the target make-up area is smaller than the corresponding facial organ area, e.g., the cheek make-up area is smaller than the cheek area.
In one embodiment, as shown in fig. 6, determining the second target image block in the second image from the first target image block includes:
step 601: determining at least one candidate image block in the second image;
step 602: determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
In the embodiment of the application, the candidate image block with the highest similarity to the first target image block can be determined from the candidate image blocks and used as the second target image block.
In the embodiment of the application, sliding windows of different sizes can be used to find the second target image block whose features are most similar to those of the first target image block.
Specifically, the candidate image block closest to the first target image block may be determined as the second target image block based on neural patch-based similarity.
By determining the second target image block in the second image using similarity, the migrated content of the second image can be adapted to the content of the first target image block, making the fused first image more harmonious.
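A brute-force sketch of the feature-level nearest-patch search, assuming PyTorch with a pretrained VGG-19 as the feature extractor (the patent names only "VGG"). A practical implementation would restrict the search window or downsample, since materializing every candidate window is memory-hungry.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# First VGG-19 conv blocks as a fixed feature extractor (an assumption;
# uses the torchvision >= 0.13 weights API).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()

@torch.no_grad()
def best_matching_patch(first_block: torch.Tensor, second_image: torch.Tensor):
    """Return the (row, col) feature-map offset of the most similar patch.

    first_block: 1x3xhxw crop, second_image: 1x3xHxW, both ImageNet-normalized;
    the second image is assumed to be at least as large as the block.
    """
    f_t = vgg(first_block)                    # 1 x C x th x tw target features
    f_s = vgg(second_image)                   # 1 x C x H' x W' search features
    th, tw = f_t.shape[-2:]
    # Every sliding window of the target's size in the search feature map.
    wins = f_s.unfold(2, th, 1).unfold(3, tw, 1)       # 1 x C x Y x X x th x tw
    wins = wins.permute(0, 2, 3, 1, 4, 5).reshape(-1, f_t.numel())
    sims = F.cosine_similarity(wins, f_t.reshape(1, -1), dim=1)
    n_cols = f_s.shape[-1] - tw + 1
    return divmod(int(sims.argmax()), n_cols)
```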
In one example of the present application, as shown in fig. 7, the image processing method includes:
step 701: obtain a second image. The second image reflects the user's makeup needs. The second image may in particular be a photograph.
step 702: obtain a makeup reference map, i.e., a sample facial image. The makeup reference map includes at least one makeup area.
step 703: split the first image according to the makeup reference map to obtain a plurality of first image blocks. Specifically, the first image may be split into a plurality of first image blocks according to characteristics such as the length, width, and outer contour of the eyebrows. Specifically, the facial organs (target areas) to be made up in the first image may be determined according to the makeup reference map, and the first image may then be split according to the facial organs to be made up.
step 704: determine a first target image block among the first image blocks.
step 705: find the second target image block closest to the first target image block in the second image using features of a VGG (Visual Geometry Group) network.
step 706: select the mask layer corresponding to the target area.
step 707: fuse the content of the second target image block into the target area using the mask layer to realize makeup migration. Facial segmentation techniques are used to find the makeup areas corresponding to the face, such as the eye contours and lips.
In one example of the present application, the second image may be a landscape as shown in fig. 8B. The first image may be a facial image, and the eye make-up effect is shown in fig. 8A.
In one example of the present application, the second image may be a portrait as shown in fig. 9B. The first image may be a facial image, and the face make-up effect is shown in fig. 9A.
The embodiment of the application also provides an image processing apparatus, as shown in fig. 10, including:
target area module 1001: for determining a target region in the first image;
first target image block module 1002: for determining, according to the target area, a first target image block in the first image that includes the target area;
second target image block module 1003: for determining a second target image block in a second image according to the first target image block;
fusion module 1004: for fusing the content in the second target image block with the content in the first target image block in the target area.
In one embodiment, as shown in fig. 11, the fusion module 1004 includes:
first mask layer unit 1101: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion;
the first presentation unit 1102: for presenting the corresponding content of the first target image block at a first presentation scale and the corresponding content of the second target image block at a second presentation scale in the target area.
In one embodiment, as shown in fig. 12, the fusion module 1004 includes:
a second mask layer unit 1201: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion for each pixel in the target area;
a second presentation unit 1202: for each pixel in the target area, presenting the corresponding content of the first target image block at a first presentation scale corresponding to the pixel on the pixel, and presenting the corresponding content of the second target image block at a second presentation scale corresponding to the pixel.
In one embodiment, as shown in fig. 13, the first image is a facial image, the target area is a target makeup area, and the target area module 1001 includes:
makeup area determination module 1301: for determining a sample makeup area in a sample facial image;
affine transformation module 1302: for affine transforming the sample makeup area to the face area in the first image according to the feature points of the first image, to obtain the target area.
In one embodiment, as shown in fig. 14, the second target image block module 1003 includes:
candidate image block unit 1501: for determining at least one candidate image block in the second image;
second target image block unit 1502: for determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
The functions of each module in each apparatus of the embodiments of the present application may be referred to the corresponding descriptions in the above methods, which are not described herein again.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 15, there is a block diagram of an electronic device according to a method of image processing according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 15, the electronic device includes: one or more processors 1601, a memory 1602, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Likewise, multiple electronic devices may be connected, each providing some of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1601 is taken as an example in fig. 15.
Memory 1602 is a non-transitory computer-readable storage medium provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the methods of image processing provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of image processing provided herein.
The memory 1602, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiments of the present application (e.g., the target region module 1001, the first target image block module 1002, the second target image block module 1003, and the fusion module 1004 shown in fig. 10). The processor 1601 executes the various functional applications and data processing of the server, i.e., implements the image processing method in the above method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 1602.
Memory 1602 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device for image processing, or the like. In addition,memory 1602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments,memory 1602 may optionally include memory located remotely from processor 1601, which may be connected to the image processing electronics by a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the image processing method may further include: an input device 1603 and an output device 1604. The processor 1601, the memory 1602, the input device 1603, and the output device 1604 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 15.
Theinput device 1603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, trackball, joystick, and like input devices. Theoutput devices 1604 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibration motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special purpose or general purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the target area in the first image is determined, the first target image block is determined, a second target image block in the second image having a certain correlation with the first target image block is determined, and finally the content in the second target image block is fused into the target area of the first image. This solves the technical problem of a single image-capture mode and achieves the technical effect of enriching image-capture means. It should be appreciated that the steps in the various flows shown above may be reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

Translated from Chinese
1. An image processing method, comprising:
determining a target area in a first image;
determining, according to the target area, a first target image block in the first image that includes the target area;
determining a second target image block in a second image according to the first target image block; and
fusing, in the target area, content in the second target image block with content in the first target image block;
wherein the first image is a facial image and the target area is a target makeup area, and determining the target area in the first image comprises:
determining a sample makeup area in a sample facial image; and
affine transforming the sample makeup area to a face area in the first image according to feature points of the first image, to obtain the target area;
wherein the target area comprises a first target area and a second target area, the first target area being a facial organ area and the second target area being a circumscribed area of the facial organ.

2. The method according to claim 1, wherein fusing, in the target area, the content in the second target image block with the content in the first target image block comprises:
determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion; and
presenting, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.

3. The method according to claim 1, wherein fusing, in the target area, the content in the second target image block with the content in the first target image block comprises:
determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion for each pixel in the target area; and
for each pixel in the target area, presenting on the pixel the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.

4. The method according to claim 1, wherein determining the second target image block in the second image according to the first target image block comprises:
determining at least one candidate image block in the second image; and
determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.

5. An image processing apparatus, comprising:
a target area module: for determining a target area in a first image;
a first target image block module: for determining, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module: for determining a second target image block in a second image according to the first target image block; and
a fusion module: for fusing, in the target area, content in the second target image block with content in the first target image block;
wherein the first image is a facial image and the target area is a target makeup area, and the target area module comprises:
a makeup area determination module: for determining a sample makeup area in a sample facial image; and
an affine transformation module: for affine transforming the sample makeup area to a face area in the first image according to feature points of the first image, to obtain the target area;
wherein the target area comprises a first target area and a second target area, the first target area being a facial organ area and the second target area being a circumscribed area of the facial organ.

6. The apparatus according to claim 5, wherein the fusion module comprises:
a first mask layer unit: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion; and
a first presentation unit: for presenting, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.

7. The apparatus according to claim 5, wherein the fusion module comprises:
a second mask layer unit: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion for each pixel in the target area; and
a second presentation unit: for presenting, on each pixel in the target area, the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.

8. The apparatus according to claim 5, wherein the second target image block module comprises:
a candidate image block unit: for determining at least one candidate image block in the second image; and
a second target image block unit: for determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.

9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-4.

10. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the method according to any one of claims 1-4.
CN202010244882.1A (filed 2020-03-31, priority 2020-03-31): Image processing method, device, equipment and computer storage medium. Status: Active. Granted as CN111462007B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010244882.1A | 2020-03-31 | 2020-03-31 | Image processing method, device, equipment and computer storage medium (granted as CN111462007B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010244882.1A | 2020-03-31 | 2020-03-31 | Image processing method, device, equipment and computer storage medium (granted as CN111462007B)

Publications (2)

Publication Number | Publication Date
CN111462007A (en) | 2020-07-28
CN111462007B (en) | 2023-06-09

Family

ID=71680187

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010244882.1A (Active) | 2020-03-31 | 2020-03-31 | CN111462007B (en): Image processing method, device, equipment and computer storage medium

Country Status (1)

Country | Link
CN (1) | CN111462007B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112348755B (en)* | 2020-10-30 | 2024-12-17 | 咪咕文化科技有限公司 | Image content restoration method, electronic device and storage medium
CN114119423B (en)* | 2021-12-08 | 2025-08-26 | 上海肇观电子科技有限公司 | Image processing method, device, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101779218A (en)* | 2007-08-10 | 2010-07-14 | 株式会社资生堂 | Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
CN103870821A (en)* | 2014-04-10 | 2014-06-18 | 上海影火智能科技有限公司 | Virtual make-up trial method and system
WO2018188534A1 (en)* | 2017-04-14 | 2018-10-18 | 深圳市商汤科技有限公司 | Face image processing method and device, and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102708575A (en)* | 2012-05-17 | 2012-10-03 | 彭强 | Daily makeup design method and system based on face feature region recognition
CN103236066A (en)* | 2013-05-10 | 2013-08-07 | 苏州华漫信息服务有限公司 | Virtual trial make-up method based on human face feature analysis
CN104899825B (en)* | 2014-03-06 | 2019-07-05 | 腾讯科技(深圳)有限公司 | Method and apparatus for styling a person in a picture
US9501689B2 (en)* | 2014-03-13 | 2016-11-22 | Panasonic Intellectual Property Management Co., Ltd. | Image processing apparatus and image processing method
CN108292413A (en)* | 2015-12-28 | 2018-07-17 | 松下知识产权经营株式会社 | Makeup simulation assistance device, makeup simulation assistance method, and makeup simulation assistance program
JP6876941B2 (en)* | 2016-10-14 | 2021-05-26 | パナソニックIpマネジメント株式会社 | Virtual make-up device, virtual make-up method and virtual make-up program
CN106952221B (en)* | 2017-03-15 | 2019-12-31 | 中山大学 | Automatic three-dimensional Beijing opera facial makeup application method
CN107123083B (en)* | 2017-05-02 | 2019-08-27 | 中国科学技术大学 | Face editing method
CN108257084B (en)* | 2018-02-12 | 2021-08-24 | 北京中视广信科技有限公司 | Lightweight automatic face makeup method based on mobile terminal
CN110136054B (en)* | 2019-05-17 | 2024-01-09 | 北京字节跳动网络技术有限公司 | Image processing method and device
CN110390632B (en)* | 2019-07-22 | 2023-06-09 | 北京七鑫易维信息技术有限公司 | Image processing method and device based on makeup template, storage medium and terminal


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A digital face makeup technique based on example pictures; Zhen Beibei; China Masters' Theses Full-text Database, Information Science and Technology; pp. 1-41*
A multi-path, region-wise fast makeup transfer deep network; Huang Yan, He Zewen, Zhang Wensheng; Journal of Software; pp. 3549-3566*
Research on real-time virtual makeup and recommendation methods based on image processing; Li Jie; China Masters' Theses Full-text Database, Information Science and Technology; pp. 1-60*

Also Published As

Publication number | Publication date
CN111462007A (en) | 2020-07-28

Similar Documents

Publication | Title
CN111652828B (en) | Face image generation method, device, equipment and medium
JP7135125B2 (en) | Near-infrared image generation method, near-infrared image generation device, generation network training method, generation network training device, electronic device, storage medium, and computer program
CN108537859B (en) | Image mask using deep learning
US11024060B1 (en) | Generating neutral-pose transformations of self-portrait images
CN111768356B (en) | Face image fusion method and device, electronic equipment and storage medium
CN111563855B (en) | Image processing method and device
CN112328345B (en) | Method, apparatus, electronic device and readable storage medium for determining theme colors
WO2021169307A1 (en) | Makeup try-on processing method and apparatus for face image, computer device, and storage medium
CN111489311A (en) | Face beautifying method and device, electronic equipment and storage medium
CN114066715B (en) | Image style transfer method, device, electronic device and storage medium
CN111259183B (en) | Image recognition method, device, electronic equipment and medium
CN112102462A (en) | Image rendering method and device
JP7635372B2 (en) | Image processing method, apparatus, device and computer program
CN111583379A (en) | Rendering method and device of virtual model, storage medium and electronic equipment
CN111462007B (en) | Image processing method, device, equipment and computer storage medium
CN110502205A (en) | Picture showing edge processing method, device, electronic equipment and readable storage medium
CN113822965A (en) | Image rendering processing method, device and equipment and computer storage medium
JP7160495B2 (en) | Image preprocessing method, device, electronic device and storage medium
US20210279928A1 (en) | Method and apparatus for image processing
CN117252777A (en) | Image processing method, device and equipment
CN112083863A (en) | Image processing method, apparatus, electronic device and readable storage medium
Syahputra et al. | Finger recognition as interaction media in Augmented Reality for historical buildings in Matsum and Kesawan regions of Medan City
CN119006953A (en) | Model training method, face generation method and device
CN116977539A (en) | Image processing method, apparatus, computer device, storage medium, and program product
JP2023542598A (en) | Character display methods, devices, electronic devices, and storage media

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
