CN114067090A - Target object map obtaining method and device, storage medium and electronic equipment - Google Patents

Target object map obtaining method and device, storage medium and electronic equipment

Info

Publication number
CN114067090A
Authority
CN
China
Prior art keywords
target object
model
map
dimensional
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111144643.XA
Other languages
Chinese (zh)
Other versions
CN114067090B (en)
Inventor
朱辉 (Zhu Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd
Priority to CN202111144643.XA
Publication of CN114067090A
Application granted
Publication of CN114067090B
Legal status: Active (Current)
Anticipated expiration

Abstract

The invention discloses a method and a device for obtaining a target object map, a storage medium, and an electronic device. The method includes: acquiring target object posture information from a target object image; performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and performing mapping processing on the material map to obtain the target object map. The invention solves the technical problems of two-dimensional material stretching and material drifting caused by inaccurate matching of map point positions in related-art target object map processing methods.

Description

Target object map obtaining method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for acquiring a target object map, a storage medium and electronic equipment.
Background
At present, in the related art, the face mapping effect in intelligent makeup products, such as intelligent makeup apps, is realized based on two-dimensional face materials and key points. Because a real face is three-dimensional, different face angles lead to inaccurate map point positions, stretching, drifting, and similar problems, which degrade the product performance and user experience of intelligent makeup products.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for acquiring a target object map, a storage medium and electronic equipment, which are used for at least solving the technical problems of two-dimensional material stretching and material drifting caused by inaccurate matching of map point positions in a target object map processing method in the related technology.
According to an aspect of the embodiments of the present invention, there is provided a method for obtaining a target object map, including: acquiring target object posture information in a target object image; performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and performing mapping processing on the material map to obtain a target object map.
According to another aspect of the embodiments of the present invention, there is also provided a method for obtaining a target object map, including: receiving a target object image from a client; acquiring target object posture information in the target object image, performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial posture; and sending the target object map to the client.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for obtaining a target object map, including: the acquisition module is used for acquiring target object posture information in the target object image; the adjusting module is used for carrying out posture adjustment on a first model based on the posture information of the target object to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and the processing module is used for carrying out mapping processing on the material map to obtain a target object mapping.
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided, where the computer-readable storage medium includes a stored program, and when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the above methods for obtaining a target object map.
According to another aspect of the embodiments of the present invention, there is further provided a processor, where the processor is configured to execute a program, where the program executes any one of the above methods for obtaining a target object map.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory, connected to the processor, for providing instructions to the processor for the following processing steps: acquiring target object posture information in a target object image; performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and performing mapping processing on the material map to obtain a target object map.
In the embodiment of the invention, target object posture information in a target object image is acquired; posture adjustment is performed on a first model based on the target object posture information to obtain a material map corresponding to the target object image, where the first model is a three-dimensional target object model in an initial posture; and mapping processing is performed on the material map to obtain the target object map.
It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model and various makeup materials matched with that model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted model is projected onto a two-dimensional plane to obtain a two-dimensional material map. The material point locations in this map can then be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
The method therefore matches the material point locations in the two-dimensional material map with the target object key point locations efficiently and accurately, improves the matching accuracy of the map point positions, and avoids the stretching and drifting of two-dimensional materials. This improves the product performance and user experience of intelligent makeup products, thereby solving the technical problems of two-dimensional material stretching and material drifting caused by inaccurate matching of map point positions in related-art target object map processing methods.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 shows a block diagram of a hardware structure of a computer terminal (or mobile device) for implementing a method for obtaining a target object map;
FIG. 2 is a flowchart of a method for obtaining a target object map according to an embodiment of the present invention;
FIG. 3 is a flowchart of an alternative target object map obtaining method according to an embodiment of the present invention;
FIG. 4 is a schematic view of an alternative scene for obtaining a face map according to an embodiment of the present invention;
FIG. 5 is a schematic view of an alternative scenario for obtaining a face map according to an embodiment of the present invention;
FIG. 6 is a flowchart of another method for obtaining a target object map according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an apparatus for obtaining a target object map according to an embodiment of the present invention;
fig. 8 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
Two-dimensional target object key points: when the target object is a face, these are two-dimensional face key points, which may include contour points such as the face outline, the left and right eyebrows, the left and right eye sockets, and the lips.
Target object mapping: pasting various material maps onto the target object picture using an image algorithm, so as to achieve effects such as beautification, makeup, and stickers.
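As a concrete illustration of "pasting a material map onto a picture with an image algorithm", the following minimal sketch alpha-blends a small material patch into an image region. The function name, array layout, and blending rule are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def paste_patch(image, patch, alpha, top, left):
    """Alpha-blend a material patch onto an image at (top, left).

    image: (H, W, 3) array; patch: (h, w, 3) array; alpha: (h, w) array
    in [0, 1], where 1 means the patch fully covers the image pixel.
    Returns a new blended image; the input image is not modified.
    """
    out = np.asarray(image, dtype=float).copy()
    h, w = alpha.shape
    region = out[top:top + h, left:left + w]
    # Standard "over" compositing: alpha * patch + (1 - alpha) * background.
    out[top:top + h, left:left + w] = (
        alpha[..., None] * patch + (1.0 - alpha[..., None]) * region
    )
    return out
```

A soft (feathered) alpha mask rather than a hard 0/1 mask is what keeps sticker edges from looking cut out.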
In the related art, there are mainly a scheme of face mapping based on two-dimensional face key points and two-dimensional materials and a scheme of face mapping based on a three-dimensional face alignment model and three-dimensional materials, in which:
1) The scheme of face mapping based on two-dimensional face key points and two-dimensional materials pastes the two-dimensional materials directly onto the face according to the two-dimensional face key points. Two-dimensional face materials are generally made from a frontal face, so the effect is best when the face is frontal. For a side face, or when the head is raised or lowered, the point positions on the material do not match the detected key point positions, so the mapping effect shows obvious stretching and drifting, which harms the user experience.
However, the face mapping effect of this scheme is poor: stretching and drifting occur because the influence of the face posture on the key point positions is not considered. The material map is generally a frontal two-dimensional face material; when the actual face is not frontal (for example, a side face), the relative positions of the left and right face contour points and of the eye, nose, and mouth contour points differ more than in a frontal face. When the frontal material map is used, stretching and drifting caused by inaccurate point positions are difficult to avoid.
2) In the scheme of face mapping based on a three-dimensional face alignment model and three-dimensional materials, the alignment model directly outputs a three-dimensional face mesh, the three-dimensional materials are mapped onto the mesh, and finally a two-dimensional picture is rendered according to the posture of the face. In this scheme, the point locations of the three-dimensional face mesh correspond one to one with the point locations of the three-dimensional materials, which largely avoids the stretching and drifting caused by mismatched map point positions.
However, because this scheme must be based on a three-dimensional face alignment model, the computation load of the model is generally significantly higher than that of a two-dimensional face alignment model. For a user, on equipment with limited computing resources such as a mobile terminal, the running speed and power consumption of the three-dimensional face alignment model are obviously inferior to those of a two-dimensional face alignment model. For developers, the three-dimensional face alignment model has high requirements on the data set, high development difficulty, and high requirements on equipment performance; its application range is relatively narrow, and it is difficult to cover common application scenarios. Its development and maintenance costs are also significantly higher than those of a two-dimensional face alignment model.
For these reasons, two-dimensional face alignment models remain mainstream in the current market, so the following description illustrates how to realize a three-dimensional face mapping effect based on two-dimensional face key points. However, the scheme of the present application is not limited to two-dimensional face alignment models; it can also be applied to a three-dimensional face alignment model to realize the three-dimensional face mapping effect based on three-dimensional face key points.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a method for obtaining a target object map is provided. The steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer-executable instructions, and although a logical order is illustrated in the flowchart, in some cases the steps may be performed in an order different from that shown here.
The method provided in embodiment 1 of the present application can be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing the target object map acquisition method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the target object map obtaining method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, thereby implementing the target object map acquisition described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the foregoing operating environment, the present invention provides a method for obtaining a target object map as shown in fig. 2, where fig. 2 is a flowchart of a method for obtaining a target object map according to an embodiment of the present invention, and as shown in fig. 2, the method for obtaining a target object map includes:
step S202, acquiring target object posture information in a target object image;
step S204, carrying out posture adjustment on the first model based on the posture information of the target object to obtain a material graph corresponding to the target object image;
and step S206, mapping the material map to obtain a target object map.
Optionally, the target object may be set flexibly according to the actual application scenario, and may appear in human scenes or animal scenes. In a human scene, the target object may be a specific part of the person, such as a human face, or a specific ornament worn by the person, such as a hair band. In an animal scene, the target object may be a specific part of the animal, such as a cat face, or an ornament worn by the animal, such as a collar.
Optionally, the target object image is obtained by capturing the target object with a smart terminal (e.g., a smart phone, a smart robot, a smart watch, a camera, or an iPad) and contains an image of the target object, for example, an image of a human face obtained by photographing a person. The image type of the target object image may be a color image, a black-and-white image, a photo image, a video frame, and the like.
Optionally, the target object posture information includes, but is not limited to, postures such as a side face, head shaking, closed eyes, a raised head, or a lowered head. The first model is a three-dimensional target object model in an initial posture, where the three-dimensional target object model in the initial posture is a standard three-dimensional target object model.
Optionally, the material map may be a two-dimensional material map, for example a two-dimensional planar material map of any type, such as cartoon whiskers, cartoon eyes, cartoon mouths, text, or pictures.
Optionally, target object mapping applies various material maps to the target object image using an image algorithm, so as to achieve mapping effects such as beautification, makeup, stickers, and decoration.
It should be noted that the execution subject of the method for obtaining a target object map provided in this embodiment may be a SaaS client. The method applies the target object posture information from the actual target object image to a standard three-dimensional target object model to generate a two-dimensional planar material map, and then performs mapping processing to obtain the target object map. That is, the corresponding two-dimensional planar material map is obtained according to the actual target object posture information and then used for mapping, instead of ignoring the actual situation and directly mapping with a frontal two-dimensional material map. This avoids the mismatch between material point locations in the material map and target object key point locations in the target object image caused by differences in posture information, and thus avoids the technical problems of material stretching and drifting.
Namely, the embodiment of the invention adopts a standard three-dimensional target object model and various makeup materials matched with that model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted model is projected onto a two-dimensional plane to obtain a two-dimensional material map. The material point locations in this map can then be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
The method therefore matches the material point locations in the two-dimensional material map with the target object key point locations efficiently and accurately, improves the matching accuracy of the map point positions, and avoids the stretching and drifting of two-dimensional materials. This improves the product performance and user experience of intelligent makeup products, thereby solving the technical problems of two-dimensional material stretching and material drifting caused by inaccurate matching of map point positions in related-art target object map processing methods.
As an alternative embodiment, as shown in fig. 3, acquiring the target object posture information in the target object image includes:
step S302, detecting a plurality of first key points from the target object image by using a third model;
step S304, performing attitude estimation by using the plurality of first key points to obtain the attitude information of the target object.
Optionally, the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image, for example two-dimensional target object key points, that is, target object key points on the two-dimensional image. It should be noted that the number of two-dimensional target object key points is generally tens to hundreds; the coordinates of each key point have two dimensions, x and y, and each point represents a specific position on the target object, such as the face outline, the left and right eyebrows, the left and right eye sockets, the lips, and other contour points. In the scheme of the application, posture estimation can be performed using, but not limited to, 280 two-dimensional target object key points to obtain the target object posture information.
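Posture estimation from two-dimensional key points can be realized in many ways. As a minimal sketch (the landmark names and the single-angle output are illustrative assumptions, not the patent's 280-point scheme), the horizontal offset of the nose from the midline between the eyes gives a coarse yaw estimate:

```python
import numpy as np

def estimate_yaw(landmarks):
    """Rough yaw estimate (radians) from 2D landmarks.

    Assumes a hypothetical landmark dict with 'left_eye', 'right_eye',
    and 'nose' entries holding (x, y) coordinates. A frontal face places
    the nose midway between the eyes; a horizontal offset indicates the
    head is turned.
    """
    left = np.asarray(landmarks["left_eye"], dtype=float)
    right = np.asarray(landmarks["right_eye"], dtype=float)
    nose = np.asarray(landmarks["nose"], dtype=float)
    eye_mid = (left + right) / 2.0
    eye_dist = np.linalg.norm(right - left)
    # Offset of the nose from the eye midline, normalised by eye distance.
    offset = (nose[0] - eye_mid[0]) / eye_dist
    return float(np.arcsin(np.clip(offset, -1.0, 1.0)))
```

A production system would instead solve for the full rotation (yaw, pitch, roll) from many key points, e.g. via a PnP-style fit against a reference 3D model.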
As an alternative embodiment, as also shown in fig. 3, performing pose adjustment on the first model based on the pose information of the target object to obtain the material map includes:
step S402, carrying out attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude;
step S404, determining a two-dimensional material map corresponding to the target object image by using the second model.
Alternatively, the second model is a three-dimensional target object model in a target posture, for example, a three-dimensional target object model in a head-up posture, a three-dimensional target object model in a head-down posture, and the like.
In this alternative embodiment, the target object posture information in the target object image changes in real time in the actual application scenario. The current posture information is therefore used to adjust the posture of the first model, yielding a three-dimensional target object model in the target posture, that is, the final (adjusted) three-dimensional model; the two-dimensional material map corresponding to the target object image is then determined from this model.
Take a face image as the target object image and face posture information as the target object posture information. When the face in the image rotates by a large angle, an under-eye ("lying silkworm") effect based entirely on two-dimensional face key points shows the obvious drift and stretching seen in fig. 4, because the face posture information is not considered: when the standard pixel material indicated by the arrow in fig. 4 is used for face mapping, the material point positions in the two-dimensional material map do not accurately match the face key point positions in the face image. The three-dimensional face mapping scheme provided by this embodiment instead achieves the face mapping effect shown in fig. 5: the face posture information is applied to a standard three-dimensional face model, which is projected onto a two-dimensional plane to obtain the adjusted two-dimensional material map indicated by the arrow in fig. 5. Performing face mapping with this adjusted material map effectively avoids the material stretching and drifting caused by mismatched point positions under different face postures.
As an alternative embodiment, performing pose adjustment on the first model based on the pose information of the target object to obtain the second model includes:
step S502, a plurality of second key points are obtained from the first model, wherein the second key points are the key points of the target object in the three-dimensional target object model of the initial posture;
step S504 is performed to perform pose adjustment on the three-dimensional coordinates of the plurality of second key points based on the pose information of the target object, so as to obtain the second model.
Optionally, the plurality of second key points are target object key points in the three-dimensional target object model in the initial posture. Each key point has three-dimensional coordinates, and each point represents a specific position on the target object, such as contour points of the face outline, the left and right eyebrows, the left and right eye sockets, and the lips. In this embodiment, the second model, that is, the three-dimensional target object model in the target posture, is obtained by adjusting the three-dimensional coordinates of the plurality of second key points according to the target object posture information in the actual target object image.
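The posture adjustment of the three-dimensional coordinates in step S504 amounts to rotating the model key points. A minimal numpy sketch is shown below; the assumption that the posture information reduces to yaw/pitch/roll angles, and the particular rotation order, are illustrative conventions rather than details from the patent:

```python
import numpy as np

def rotate_points(points, yaw=0.0, pitch=0.0, roll=0.0):
    """Apply a yaw/pitch/roll rotation (radians) to an (N, 3) array of
    three-dimensional model key points, returning the posed key points."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    # Compose yaw, then pitch, then roll, and apply to every point.
    return np.asarray(points, dtype=float) @ (rz @ rx @ ry).T
```

Rotating the standard model's key points this way produces the "second model" posture before projection to the two-dimensional plane.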
As an alternative embodiment, the determining the two-dimensional material map corresponding to the target object image by using the second model includes:
step S602, projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
In this alternative embodiment, the two-dimensional material map and the corresponding standard target object key points are obtained by projecting the second model onto a preset two-dimensional plane. The two-dimensional material can then be accurately pasted onto the image according to the two-dimensional target object key points output by the target object alignment model (which correspond one to one with the standard target object key points), obtaining the target object map.
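The projection in step S602 can be realized in several ways; the weak-perspective sketch below (orthographic projection plus a scale and translation) is one simple assumption for illustration, not necessarily the exact projection the patent uses:

```python
import numpy as np

def project_orthographic(points3d, scale=1.0, tx=0.0, ty=0.0):
    """Project posed (N, 3) model points onto a 2D plane by dropping
    the depth axis, then applying an image-space scale and translation."""
    pts = np.asarray(points3d, dtype=float)
    return pts[:, :2] * scale + np.array([tx, ty])
```

The scale and translation would normally be chosen so the projected standard key points land on the detected face region.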
As an alternative embodiment, as shown in fig. 3, the step of performing mapping processing on the material map to obtain the target object map includes:
step S702, acquiring a plurality of third key points from the material map, wherein the third key points are key points of a target object in the material map;
step S704, attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points, so as to obtain the target object map.
The scheme of the present application adopts a standard target object three-dimensional model and various makeup material maps matched with the standard target object model. In actual application, the pose of the standard target object model is adjusted according to the target object pose information output by the target object alignment model, and the adjusted three-dimensional model is projected onto a preset two-dimensional plane to obtain a two-dimensional material map and a plurality of corresponding standard target object key points; a plurality of two-dimensional target object key points are then obtained through the target object alignment model, and the material map is attached to the target object image by using the correspondence between the two-dimensional target object key points and the standard target object key points, so as to obtain the target object map. According to this scheme, the problems of material stretching and drifting caused by inaccurate point locations can be effectively avoided when standard two-dimensional target object materials are pasted.
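One way to realize the attachment in step S704 is to fit a transform from the third key points (in the material map) to the first key points (in the image) and warp the material with it. The least-squares affine fit below is an illustrative choice, not prescribed by the scheme; production beautification pipelines often use piecewise or mesh-based warps instead.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix A with dst ~= [src | 1] @ A.T,
    estimated from corresponding (N, 2) key point sets."""
    homog = np.hstack([src, np.ones((src.shape[0], 1))])   # (N, 3)
    coeffs, *_ = np.linalg.lstsq(homog, dst, rcond=None)   # (3, 2)
    return coeffs.T                                        # (2, 3)

def warp_points(points, affine):
    """Map (N, 2) material-map coordinates into image coordinates."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return homog @ affine.T
```

Once the affine matrix is estimated, every material-map coordinate can be pushed through `warp_points` (or the inverse transform sampled) to paste the material onto the target object image.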
As another alternative embodiment, a target object map may also be obtained based on a three-dimensional target object mesh and three-dimensional materials, likewise using a standard target object three-dimensional model and various makeup materials matched with the standard target object model. The difference is that this scheme depends on a three-dimensional target object alignment model, which outputs a three-dimensional target object mesh (or three-dimensional target object); the three-dimensional target object materials are pasted directly onto the three-dimensional target object, which is then projected onto a preset two-dimensional plane for rendering. It should be noted, however, that the development difficulty and the amount of computation of a three-dimensional target object alignment model are generally significantly higher than those of a two-dimensional target object alignment model; in particular, at the mobile end, three-dimensional target object alignment models have not yet been popularized, and two-dimensional target object alignment models are more widely applied.
The present application further provides a method for obtaining a target object map as shown in fig. 6, where fig. 6 is a flowchart of another method for obtaining a target object map according to an embodiment of the present invention, and as shown in fig. 6, the method for obtaining a target object map includes:
step S802, receiving a target object image from a client;
step S804, obtaining pose information of a target object in the target object image, performing pose adjustment on a first model based on the pose information of the target object to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial pose;
step S806, sending the target object map to the client.
It should be noted that the execution subject of the method for obtaining a target object map provided in the embodiment of the present application may be a SaaS server. The method includes: receiving a target object image from a client; applying the target object pose information in the actual target object image to a standard three-dimensional target object model to generate a two-dimensional planar material map; performing mapping processing to obtain a target object map; and sending the target object map to the SaaS client. In this way, the two-dimensional planar material map is generated according to the actual target object pose information before mapping processing, instead of directly pasting a front-facing two-dimensional material map onto the target object image regardless of the actual situation. This avoids the mismatch between the material point locations in the material map and the target object key point locations in the target object image caused by differing target object pose information, and thus avoids the technical problems of material stretching and drifting.
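The server-side flow described above can be condensed into a toy pipeline. Everything below is deliberately simplified for illustration: pose information is reduced to a single yaw angle, projection is orthographic, and "pasting" is a centroid-aligning translation; none of these choices is prescribed by the method itself.

```python
import numpy as np

def serve_target_object_map(image_keypoints, standard_model, yaw):
    """Toy SaaS-server flow: pose-adjust the standard (first) model,
    project it to a 2D plane, and align the projected material key
    points with the key points detected in the client's image."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    posed = standard_model @ rot.T              # pose adjustment
    material_kps = posed[:, :2]                 # projection to 2D
    shift = image_keypoints.mean(axis=0) - material_kps.mean(axis=0)
    return material_kps + shift                 # positions sent back
```

With a zero yaw the returned positions simply translate the projected model onto the detected key points, which is the degenerate front-facing case.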
Optionally, the target object may be flexibly set according to the actual application scenario, and the solution can be applied to both human and animal scenes. In a human scenario, the target object may be a specific part of the human body, such as a human face, or a particular ornament worn by a human, such as a hair band. In an animal scenario, the target object may be a specific part of the animal, such as a cat face, or an ornament worn by the animal, such as a collar.
Optionally, the target object image is obtained by capturing a target object through a smart terminal (e.g., a smart phone, a smart robot, a smart watch, a camera, an iPad, etc.) and contains an image of the target object, for example, an image containing a human face obtained by photographing a person. The image type of the target object image may be a color image, a black-and-white image, a photo, a video frame, etc.
Optionally, the target object posture information includes but is not limited to: posture information such as side face, head shaking, eye closing, head raising, head lowering and the like; the first model is a three-dimensional target object model in an initial posture, and the three-dimensional target object model in the initial posture is a standard three-dimensional target object model.
Alternatively, the material map may be a two-dimensional material map, for example a two-dimensional planar material map of any type, such as cartoon whiskers, cartoon eyes, cartoon mouths, characters, and pictures.
Optionally, target object mapping applies various material maps to the target object image by using an image algorithm, so as to achieve mapping effects such as beautifying, makeup, stickers, and decoration.
The embodiment of the invention adopts a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map, the material point locations and the target object key point locations in the two-dimensional material map can be matched more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, an embodiment of an apparatus for implementing the above method for obtaining a target object map is further provided. Fig. 7 is a schematic structural diagram of an apparatus for obtaining a target object map according to an embodiment of the present invention. As shown in fig. 7, the apparatus includes: an obtaining module 500, an adjusting module 502 and a processing module 504, wherein:
an obtaining module 500, configured to obtain target object posture information in a target object image; an adjusting module 502, configured to perform pose adjustment on a first model based on the pose information of the target object to obtain a material map corresponding to the target object image, where the first model is a three-dimensional target object model in an initial pose; and the processing module 504 is configured to perform mapping processing on the material map to obtain a target object map.
It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model, and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map, the material point locations and the target object key point locations in the two-dimensional material map can be matched more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted here that the acquiring module 500, the adjusting module 502 and the processing module 504 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that, reference may be made to the relevant description in embodiment 1 for a preferred implementation of this embodiment, and details are not described here again.
Example 3
According to an embodiment of the present invention, there is further provided an embodiment of an electronic device, which may be any one of computing devices in a computing device group. The electronic device includes: a processor and a memory, wherein:
a processor; and a memory, connected to the processor, for providing instructions to the processor for processing the following processing steps: step 1, acquiring target object posture information in a target object image; step 2, carrying out posture adjustment on a first model based on the posture information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and 3, mapping the material map to obtain a target object map.
It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model, and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map, the material point locations and the target object key point locations in the two-dimensional material map can be matched more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted that, reference may be made to the relevant description in embodiment 1 for a preferred implementation of this embodiment, and details are not described here again.
Example 4
According to an embodiment of the present invention, there may be provided an embodiment of a computer terminal, which may be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps in the method for obtaining a target object map: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Alternatively, fig. 8 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 8, the computer terminal may include: one or more processors 602 (only one of which is shown), a memory 604, and a peripherals interface 606.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for obtaining a target object map in the embodiment of the present invention. The processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the above method for obtaining a target object map. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Optionally, the processor may further execute the program code of the following steps: detecting a plurality of first key points from the target object image by using a third model, wherein the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image; and performing attitude estimation by adopting the plurality of first key points to obtain the attitude information of the target object.
Optionally, the processor may further execute the program code of the following steps: performing attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude; and determining the two-dimensional material map corresponding to the target object image by using the second model.
Optionally, the processor may further execute the program code of the following steps: acquiring a plurality of second key points from the first model, wherein the second key points are target object key points in the three-dimensional target object model in the initial posture; and performing posture adjustment on the three-dimensional coordinates of the plurality of second key points based on the posture information of the target object to obtain the second model.
Optionally, the processor may further execute the program code of the following steps: and projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
Optionally, the processor may further execute the program code of the following steps: acquiring a plurality of third key points from the material map, wherein the third key points are key points of a target object in the material map; and attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points to obtain the target object map.
Optionally, the processor may further execute the program code of the following steps: receiving a target object image from a client; acquiring target object posture information in the target object image, performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial posture; and sending the target object map to the client.
The embodiment of the invention provides a scheme for acquiring a target object map. It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model, and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map, the material point locations and the target object key point locations in the two-dimensional material map can be matched more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It can be understood by those skilled in the art that the structure shown in fig. 8 is only an illustration, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 8 does not limit the structure of the above electronic device. For example, the computer terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 8, or have a different configuration from that shown in fig. 8.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the computer-readable storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 5
Embodiments of a computer-readable storage medium are also provided according to embodiments of the present invention. Optionally, in this embodiment, the computer-readable storage medium may be configured to store the program code executed by the method for obtaining the target object map provided in embodiment 1.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: detecting a plurality of first key points from the target object image by using a third model, wherein the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image; and performing attitude estimation by adopting the plurality of first key points to obtain the attitude information of the target object.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: performing attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude; and determining the two-dimensional material map corresponding to the target object image by using the second model.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring a plurality of second key points from the first model, wherein the second key points are target object key points in the three-dimensional target object model in the initial posture; and performing posture adjustment on the three-dimensional coordinates of the plurality of second key points based on the posture information of the target object to obtain the second model.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: and projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring a plurality of third key points from the material map, wherein the third key points are key points of a target object in the material map; and attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points to obtain the target object map.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: receiving a target object image from a client; acquiring target object posture information in the target object image, performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial posture; and sending the target object map to the client.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

Translated fromChinese
1. A method for obtaining a target object map, comprising:
obtaining target object pose information from a target object image;
performing pose adjustment on a first model based on the target object pose information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial pose; and
performing mapping processing on the material map to obtain the target object map.

2. The method for obtaining a target object map according to claim 1, wherein obtaining the target object pose information from the target object image comprises:
detecting a plurality of first key points from the target object image by using a third model, wherein the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image; and
performing pose estimation by using the plurality of first key points to obtain the target object pose information.

3. The method for obtaining a target object map according to claim 1, wherein performing pose adjustment on the first model based on the target object pose information to obtain the material map comprises:
performing pose adjustment on the first model based on the target object pose information to obtain a second model, wherein the second model is a three-dimensional target object model in a target pose; and
determining a two-dimensional material map corresponding to the target object image by using the second model.

4. The method for obtaining a target object map according to claim 3, wherein performing pose adjustment on the first model based on the target object pose information to obtain the second model comprises:
obtaining a plurality of second key points from the first model, wherein the plurality of second key points are target object key points in the three-dimensional target object model in the initial pose; and
performing pose adjustment on the three-dimensional coordinates of the plurality of second key points based on the target object pose information to obtain the second model.

5. The method for obtaining a target object map according to claim 3, wherein determining the two-dimensional material map corresponding to the target object image by using the second model comprises:
projecting the second model onto a preset two-dimensional plane to obtain the two-dimensional material map.

6. The method for obtaining a target object map according to claim 1, wherein performing mapping processing on the material map to obtain the target object map comprises:
obtaining a plurality of third key points from the material map, wherein the plurality of third key points are target object key points in the material map; and
pasting the material map onto the target object image by using a correspondence between the plurality of first key points and the plurality of third key points to obtain the target object map.

7. A method for obtaining a target object map, comprising:
receiving a target object image from a client;
obtaining target object pose information from the target object image, performing pose adjustment on a first model based on the target object pose information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial pose; and
sending the target object map to the client.

8. A device for obtaining a target object map, comprising:
an obtaining module, configured to obtain target object pose information from a target object image;
an adjustment module, configured to perform pose adjustment on a first model based on the target object pose information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial pose; and
a processing module, configured to perform mapping processing on the material map to obtain a target object map.

9. A computer-readable storage medium, comprising a stored program, wherein when the program runs, a device on which the computer-readable storage medium resides is controlled to execute the method for obtaining a target object map according to any one of claims 1 to 7.

10. A processor, configured to run a program, wherein when the program runs, the method for obtaining a target object map according to any one of claims 1 to 7 is executed.

11. An electronic device, comprising:
a processor; and
a memory, connected to the processor and configured to provide the processor with instructions for the following processing steps:
step 1: obtaining target object pose information from a target object image;
step 2: performing pose adjustment on a first model based on the target object pose information to obtain a material map corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial pose;
step 3: performing mapping processing on the material map to obtain a target object map.
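Claims 2 to 6 together outline a geometric pipeline: detect 2D key points, estimate a pose, pose-adjust the key points of an initial-pose 3D model, project them onto a 2D plane, and warp the material map onto the target image via key-point correspondences. The NumPy sketch below illustrates those steps under stated assumptions: the pose is represented as a 3x3 rotation matrix, the "preset plane" is an orthographic projection onto z = 0, and the key-point correspondence is realised as a least-squares affine warp. All function names are illustrative; none of this is the patent's actual implementation.

```python
import numpy as np

def adjust_pose(initial_points, rotation):
    # Claim 4: pose-adjust the 3D coordinates of the model's key points.
    # `rotation` is a 3x3 rotation matrix standing in for the estimated
    # target object pose information (the patent fixes no representation).
    return initial_points @ rotation.T

def project_to_plane(points_3d):
    # Claim 5: project the pose-adjusted model onto a preset 2D plane.
    # A simple orthographic projection onto z = 0 is assumed here.
    return points_3d[:, :2]

def estimate_affine(material_kps, image_kps):
    # Claim 6: exploit the correspondence between key points in the
    # material map and key points in the target image. A least-squares
    # 2D affine transform is one way to realise that correspondence.
    ones = np.ones((len(material_kps), 1))
    A = np.hstack([material_kps, ones])                # N x 3
    M, *_ = np.linalg.lstsq(A, image_kps, rcond=None)  # 3 x 2
    return M

def warp_points(M, pts):
    # Map material-map coordinates into target-image coordinates.
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Toy data: a 90-degree rotation about z as the stand-in pose.
rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
model_kps = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0]])
projected = project_to_plane(adjust_pose(model_kps, rz))
M = estimate_affine(projected, projected * 2.0 + 1.0)  # toy correspondence
warped = warp_points(M, projected)
```

With exact correspondences the affine fit is exact, so `warped` lands on the image key points; with noisy detections the least-squares fit gives the best affine approximation, which is why key-point-based pasting resists the stretching and drifting the abstract mentions.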
CN202111144643.XA | Priority date: 2021-09-28 | Filing date: 2021-09-28 | Method, device, storage medium and electronic device for obtaining target object map | Active | CN114067090B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111144643.XA | 2021-09-28 | 2021-09-28 | Method, device, storage medium and electronic device for obtaining target object map

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111144643.XA | 2021-09-28 | 2021-09-28 | Method, device, storage medium and electronic device for obtaining target object map

Publications (2)

Publication Number | Publication Date
CN114067090A | 2022-02-18
CN114067090B (en) | 2025-06-27

Family

ID=80233817

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111144643.XA (Active; granted as CN114067090B) | Method, device, storage medium and electronic device for obtaining target object map | 2021-09-28 | 2021-09-28

Country Status (1)

Country | Link
CN | CN114067090B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108062791A (en)* | 2018-01-12 | 2018-05-22 | 北京奇虎科技有限公司 | Method and apparatus for reconstructing a three-dimensional human face model
CN108898068A (en)* | 2018-06-06 | 2018-11-27 | 腾讯科技(深圳)有限公司 | Facial image processing method and apparatus, and computer-readable storage medium
CN109767487A (en)* | 2019-01-04 | 2019-05-17 | 北京达佳互联信息技术有限公司 | Three-dimensional face reconstruction method and apparatus, electronic device and storage medium
CN111402122A (en)* | 2020-03-20 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Image mapping processing method and apparatus, readable medium and electronic device
CN111695431A (en)* | 2020-05-19 | 2020-09-22 | 深圳禾思众成科技有限公司 | Face recognition method and apparatus, terminal device and storage medium
CN111862287A (en)* | 2020-07-20 | 2020-10-30 | 广州市百果园信息技术有限公司 | Eye texture image generation method, texture mapping method, apparatus and electronic device
CN112257657A (en)* | 2020-11-11 | 2021-01-22 | 网易(杭州)网络有限公司 | Face image fusion method and apparatus, storage medium and electronic device
CN112669447A (en)* | 2020-12-30 | 2021-04-16 | 网易(杭州)网络有限公司 | Model avatar creation method and apparatus, electronic device and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108062791A (en)* | 2018-01-12 | 2018-05-22 | 北京奇虎科技有限公司 | Method and apparatus for reconstructing a three-dimensional human face model
CN108898068A (en)* | 2018-06-06 | 2018-11-27 | 腾讯科技(深圳)有限公司 | Facial image processing method and apparatus, and computer-readable storage medium
CN109767487A (en)* | 2019-01-04 | 2019-05-17 | 北京达佳互联信息技术有限公司 | Three-dimensional face reconstruction method and apparatus, electronic device and storage medium
WO2020140832A1 (en)* | 2019-01-04 | 2020-07-09 | 北京达佳互联信息技术有限公司 | Three-dimensional facial reconstruction method and apparatus, electronic device and storage medium
CN111402122A (en)* | 2020-03-20 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Image mapping processing method and apparatus, readable medium and electronic device
CN111695431A (en)* | 2020-05-19 | 2020-09-22 | 深圳禾思众成科技有限公司 | Face recognition method and apparatus, terminal device and storage medium
CN111862287A (en)* | 2020-07-20 | 2020-10-30 | 广州市百果园信息技术有限公司 | Eye texture image generation method, texture mapping method, apparatus and electronic device
CN112257657A (en)* | 2020-11-11 | 2021-01-22 | 网易(杭州)网络有限公司 | Face image fusion method and apparatus, storage medium and electronic device
CN112669447A (en)* | 2020-12-30 | 2021-04-16 | 网易(杭州)网络有限公司 | Model avatar creation method and apparatus, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Chang et al., "A three-dimensional face reconstruction method based on left and right oblique side-view photos", Application Research of Computers (计算机应用研究), no. 10, 31 December 2011 *

Also Published As

Publication number | Publication date
CN114067090B (en) | 2025-06-27

Similar Documents

Publication | Title
US10832039B2 (en) | Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN108305312B (en) | Method and device for generating a 3D virtual image
CN109448099B (en) | Picture rendering method and device, storage medium and electronic device
CN109427083B (en) | Method, device, terminal and storage medium for displaying a three-dimensional virtual image
EP3889915A2 (en) | Method and apparatus for generating a virtual avatar, device, medium and computer program product
EP2852935B1 (en) | Systems and methods for generating a 3-D model of a user for a virtual try-on product
CN109671141B (en) | Image rendering method and device, storage medium and electronic device
US20150235416A1 (en) | Systems and methods for generating a 3-D model of a virtual try-on product
CN110866977B (en) | Augmented reality processing method, device, system, storage medium and electronic equipment
US10726580B2 (en) | Method and device for calibration
CN106952221B (en) | Automatic makeup method for three-dimensional Beijing opera facial makeup
WO2018014766A1 (en) | Generation method, apparatus and system for an augmented reality module, and storage medium
CN108200337B (en) | Method, device, terminal and storage medium for photographing processing
CN111192223B (en) | Method, device and equipment for processing a face texture image, and storage medium
CN110096144B (en) | Interactive holographic projection method and system based on three-dimensional reconstruction
CN110148191A (en) | Virtual expression generation method for video, device and computer-readable storage medium
KR20250116702A (en) | Method and apparatus for generating a 3D hand model, and electronic device
CN109360277B (en) | Virtual simulation display control method and device, storage medium and electronic device
CN113450448B (en) | Image processing method, device and system
CN111008966A (en) | RGBD-based single-view anthropometric method, device, and computer-readable storage medium
WO2019096057A1 (en) | Dynamic image generation method, and processing device
CN114067090A (en) | Target object map obtaining method and device, storage medium and electronic equipment
CN113963139B (en) | Virtual object processing method, device, storage medium and computer equipment
CN114332357B (en) | Virtual image display method and device, storage medium and electronic device
HK40067037A (en) | Target object sticker obtaining method and device, storage medium and electronic device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
REG | Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40067037; Country of ref document: HK)
GR01 | Patent grant
