Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or expressions appearing in the description of the embodiments of the present application are explained as follows:
Two-dimensional target object key points: taking a human face as an example, these are two-dimensional face key points, which may include contour points of the face frame, the left and right eyebrows, the left and right eye sockets, the lips, and the like.
Target object mapping: various material maps are pasted onto the target object picture by an image algorithm, so as to achieve effects such as beautification, makeup, and stickers.
In the related art, there are mainly two schemes: face mapping based on two-dimensional face key points and two-dimensional materials, and face mapping based on a three-dimensional face alignment model and three-dimensional materials, in which:
1) The face mapping scheme based on two-dimensional face key points and two-dimensional materials directly pastes the two-dimensional materials onto the face according to the two-dimensional face key points. Two-dimensional face materials are generally made based on a frontal face, so the effect is best when the face is frontal. For a side face, or when the head is raised or lowered, the point positions on the material do not match the detected key point positions, so the mapping effect has obvious stretching and drifting problems, which degrades the user experience.
However, the face mapping effect of this scheme is not good; the stretching and drifting problems exist because the influence of the face pose on the key point positions is not considered. The material map is generally a frontal two-dimensional face material, and when the actual face is not frontal, for example a side face, the relative positions of the left and right face contour points with respect to the contour points of the eyes, nose, and mouth differ considerably from those of a frontal face. When the frontal face material is pasted in this case, stretching and drifting caused by inaccurate point positions are difficult to avoid.
2) In the face mapping scheme based on a three-dimensional face alignment model and three-dimensional materials, the three-dimensional face alignment model directly outputs a three-dimensional face mesh, the three-dimensional materials are mapped onto the three-dimensional face mesh, and finally a two-dimensional picture is rendered according to the pose of the face. In this scheme, the point locations of the three-dimensional face mesh correspond one to one with the point locations of the three-dimensional materials, so the stretching and drifting problems caused by mismatched map point locations are well solved.
However, because this scheme needs to be based on a three-dimensional face alignment model, the computational load of the model is generally significantly higher than that of a two-dimensional face alignment model. For the user, on devices with limited computing resources, such as mobile terminals, the running speed and power consumption of the three-dimensional face alignment model are obviously inferior to those of a two-dimensional face alignment model. For developers, the three-dimensional face alignment model has high requirements on data sets, high development difficulty, and high requirements on device performance; its application range is relatively narrow and it is difficult to cover common application scenarios, and its development and maintenance costs are also significantly higher than those of a two-dimensional face alignment model.
For the above reasons, the two-dimensional face alignment model is still the mainstream in the current market, so the following technical scheme schematically illustrates, by way of example, how to achieve a three-dimensional face mapping effect based on two-dimensional face key points. However, the scheme of the present application is not limited to being applied to a two-dimensional face alignment model; it can also be applied to a three-dimensional face alignment model to achieve a three-dimensional face mapping effect based on three-dimensional face key points.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for obtaining a target object map. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that described herein.
The method provided in Embodiment 1 of the present application can be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing the target object map obtaining method. As shown in Fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the computer terminal 10 may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in Fig. 1 is merely illustrative and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in Fig. 1, or have a different configuration than shown in Fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the target object map obtaining method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the above-mentioned target object map obtaining method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and such remote memory may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the foregoing operating environment, the present invention provides a method for obtaining a target object map as shown in fig. 2, where fig. 2 is a flowchart of a method for obtaining a target object map according to an embodiment of the present invention, and as shown in fig. 2, the method for obtaining a target object map includes:
step S202, acquiring target object posture information in a target object image;
step S204, carrying out posture adjustment on the first model based on the posture information of the target object to obtain a material map corresponding to the target object image;
and step S206, mapping the material map to obtain a target object map.
Optionally, the target object may be flexibly set according to the actual application scenario, and the scheme can be applied to human life scenarios and animal life scenarios. In a human life scenario, the target object may be a specific part of the human being itself, such as a human face, or a specific ornament worn by a human, such as a hair band. In an animal life scenario, the target object may be a specific part of the animal itself, such as a cat face, or a specific ornament worn by an animal, such as a collar.
Optionally, the target object image is obtained by capturing the target object with a smart terminal (e.g., a smart phone, a smart robot, a smart watch, a camera, an iPad, etc.) and includes an image of the target object, for example, an image including a human face obtained by photographing a person. The image type of the target object image may be a color image, a black-and-white image, a photo image, a video image, etc.
Optionally, the target object posture information includes but is not limited to: posture information such as side face, head shaking, eye closing, head raising, head lowering and the like; the first model is a three-dimensional target object model in an initial posture, and the three-dimensional target object model in the initial posture is a standard three-dimensional target object model.
Optionally, the material map may be a two-dimensional material map, for example, a two-dimensional planar material map of any type, such as cute cartoon whiskers, cartoon eyes and cartoon mouths, as well as text and pictures.
Optionally, target object mapping means applying various material maps to the target object image by using an image algorithm, so as to achieve mapping effects such as beautification, makeup, stickers, and decoration.
It should be noted that the execution subject of the method for obtaining a target object map provided in this embodiment of the present application may be a SaaS client. The method uses a standard three-dimensional target object model: the target object pose information in the actual target object image is applied to the standard three-dimensional target object model to generate a two-dimensional planar material map, and mapping processing is then performed to obtain the target object map. In other words, the corresponding two-dimensional planar material map is obtained according to the actual target object pose information and the mapping processing is then performed, instead of ignoring the actual situation and directly applying a frontal-face two-dimensional material map to the target object image. This avoids the technical problem that, due to differences in target object pose information, the material point locations in the material map do not accurately match the target object key point locations in the target object image, which in turn causes material stretching and drifting.
Namely, the embodiment of the present invention adopts a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map. In this way, the material point locations in the two-dimensional material map can be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
As an alternative embodiment, as shown in fig. 3, acquiring the target object posture information in the target object image includes:
step S302, detecting a plurality of first key points from the target object image by using a third model;
step S304, performing attitude estimation by using the plurality of first key points to obtain the attitude information of the target object.
Optionally, the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image, for example, two-dimensional target object key points, that is, target object key points on the two-dimensional image. It should be noted that the number of the two-dimensional target object key points is generally tens to hundreds, the coordinates of each target object key point are formed by two dimensions of x and y, and each point represents a specific position on the target object, such as a face frame, left and right eyebrows, left and right eye sockets, lips, and other contour points. In the scheme of the application, the attitude estimation can be performed by using, but not limited to, 280 two-dimensional target object key points to obtain the target object attitude information.
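For illustration only, the following is a minimal sketch of how pose estimation from two-dimensional key points (steps S302-S304) might be implemented, assuming OpenCV is available and that each detected two-dimensional key point has a known corresponding point on the standard three-dimensional target object model; the pinhole camera intrinsics below are rough assumptions made for the sketch, not part of the present scheme.

```python
# Minimal sketch of steps S302-S304 (pose estimation from 2D key points).
# OpenCV, the pinhole intrinsics and the point layout are illustrative
# assumptions; they are not prescribed by the scheme itself.
import numpy as np
import cv2


def estimate_pose(keypoints_2d, model_points_3d, image_size):
    """Recover the target object pose from N matched points.

    keypoints_2d:    (N, 2) key points detected in the target object image.
    model_points_3d: (N, 3) corresponding points on the standard 3D model.
    image_size:      (height, width) of the target object image.
    Returns a 3x3 rotation matrix and a (3, 1) translation vector.
    """
    h, w = image_size
    focal = float(w)  # rough focal-length assumption: image width in pixels
    camera_matrix = np.array([[focal, 0.0, w / 2.0],
                              [0.0, focal, h / 2.0],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                  keypoints_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec
```

The rotation matrix and translation vector returned here stand in for the target object pose information used in the later steps; in practice the pose may equally be represented as Euler angles (yaw, pitch, roll).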
As an alternative embodiment, as also shown in fig. 3, performing pose adjustment on the first model based on the pose information of the target object to obtain the material map includes:
step S402, carrying out attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude;
step S404, determining a two-dimensional material map corresponding to the target object image by using the second model.
Alternatively, the second model is a three-dimensional target object model in a target posture, for example, a three-dimensional target object model in a head-up posture, a three-dimensional target object model in a head-down posture, and the like.
In the above optional embodiment, since the target object posture information in the target object image changes in real time in an actual application scenario, the current target object posture information is used to perform posture adjustment on the first model to obtain a three-dimensional target object model of the target posture, that is, a three-dimensional target object model in the final posture (i.e., the adjusted three-dimensional model), and the two-dimensional material map corresponding to the target object image is determined according to this final-posture three-dimensional target object model.
Taking the case where the target object image is a face image and the target object pose information is face pose information as an example, when the face in the face image is rotated by a large angle, an under-eye ("lying silkworm") effect based entirely on two-dimensional face key points, without considering the face pose information, produces the obvious drifting and stretching shown in fig. 4. This is because, when the standard material indicated by the arrow in fig. 4 is used for face mapping, the material point locations in the two-dimensional material map do not accurately match the face key point locations in the face image. In contrast, the three-dimensional face mapping technical scheme provided in this embodiment of the application can achieve the face mapping effect shown in fig. 5: the face pose information is applied to the standard three-dimensional face model, which is then projected onto a two-dimensional plane to obtain an adjusted two-dimensional material map (indicated by the arrow in fig. 5), and face mapping is performed with this adjusted two-dimensional material map. This effectively solves the technical problem of material stretching and drifting caused by the material point locations in the two-dimensional material map not accurately matching the face key point locations in the face image due to different face pose information.
As an alternative embodiment, performing pose adjustment on the first model based on the pose information of the target object to obtain the second model includes:
step S502, a plurality of second key points are obtained from the first model, wherein the second key points are the key points of the target object in the three-dimensional target object model of the initial posture;
step S504, performing pose adjustment on the three-dimensional coordinates of the plurality of second key points based on the pose information of the target object, so as to obtain the second model.
Optionally, the plurality of second key points are target object key points in the three-dimensional target object model in the initial pose, that is, three-dimensional target object key points: the coordinates of each key point are formed by the three dimensions x, y, and z, and each point represents a specific position on the target object, such as contour points of the face frame, the left and right eyebrows, the left and right eye sockets, and the lips. In the embodiment of the present application, the second model, that is, the three-dimensional target object model in the target pose, is obtained by performing pose adjustment on the three-dimensional coordinates of the plurality of second key points according to the target object pose information in the actual target object image.
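As a minimal sketch, under the assumption that the pose is expressed as the rotation matrix and translation vector recovered above, the pose adjustment of the second key points (steps S502-S504) reduces to a rigid transform of their three-dimensional coordinates; the function name is illustrative only.

```python
# Minimal sketch of steps S502-S504: applying the estimated pose to the
# three-dimensional key points of the initial-pose (standard) model. Only the
# key points are transformed here; a full implementation would transform every
# vertex of the standard three-dimensional target object model.
import numpy as np


def adjust_model_pose(model_keypoints_3d, rotation, translation):
    """Rigidly transform (N, 3) key points: x' = R @ x + t for each point."""
    return model_keypoints_3d @ rotation.T + translation.reshape(1, 3)
```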
As an alternative embodiment, the determining the two-dimensional material map corresponding to the target object image by using the second model includes:
step S602, projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
In the above optional embodiment, the two-dimensional material map and the corresponding key points of the standard target object may be obtained by projecting the second model onto a preset two-dimensional plane, and the two-dimensional material may be accurately pasted onto the image according to the key points of the two-dimensional target object output by the target object alignment model (the key points are in one-to-one correspondence with the key points of the standard target object), so as to obtain the target object map.
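A minimal sketch of the projection in step S602 follows, assuming OpenCV and the same camera intrinsics used during pose estimation. Only the key-point projection needed for the later attachment step is shown; rendering the textured material map itself would additionally require rasterising the model's triangles.

```python
# Minimal sketch of step S602: projecting the pose-adjusted 3D key points onto
# a preset two-dimensional plane. The adjusted points are assumed to already be
# expressed in camera coordinates, so identity extrinsics are used here.
import numpy as np
import cv2


def project_to_plane(adjusted_keypoints_3d, camera_matrix):
    """Project (N, 3) pose-adjusted key points to (N, 2) image coordinates."""
    rvec = np.zeros(3, dtype=np.float64)   # no extra rotation
    tvec = np.zeros(3, dtype=np.float64)   # no extra translation
    pts_2d, _ = cv2.projectPoints(adjusted_keypoints_3d.astype(np.float64),
                                  rvec, tvec, camera_matrix,
                                  np.zeros((4, 1)))  # no distortion assumed
    return pts_2d.reshape(-1, 2)
```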
As an alternative embodiment, as shown in fig. 3, the step of performing mapping processing on the material map to obtain the target object map includes:
step S702, acquiring a plurality of third key points from the material map, wherein the third key points are key points of a target object in the material map;
step S704, attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points, to obtain the target object map.
The scheme of the present application adopts a standard three-dimensional target object model and various makeup material maps matched with the standard target object model. In actual application, the pose of the standard target object model is adjusted according to the target object pose information obtained on the basis of the target object alignment model, and the adjusted three-dimensional model is projected onto a preset two-dimensional plane to obtain a two-dimensional material map and a plurality of corresponding standard target object key points; a plurality of two-dimensional target object key points are acquired from the target object image through the target object alignment model, and the material map is attached to the target object image by using the correspondence between the standard target object key points and the two-dimensional target object key points, so as to obtain the target object map. Compared with directly applying standard two-dimensional target object materials, this scheme can effectively avoid the material stretching and drifting caused by inaccurate point positions.
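For illustration, the attachment in steps S702-S704 may be sketched as below, assuming OpenCV, a material map with an alpha channel, and a simple global similarity fit between the third key points (in the material map) and the first key points (in the target object image); a production implementation might instead use a piecewise, per-triangle warp for tighter alignment.

```python
# Minimal sketch of steps S702-S704: attaching the 2D material map to the
# target object image via the key-point correspondence. A global similarity
# transform is fitted for brevity; the material map is assumed to be RGBA.
import numpy as np
import cv2


def attach_material(image, material_rgba, material_keypoints, image_keypoints):
    """Warp the material map onto the image and alpha-blend it."""
    h, w = image.shape[:2]
    # Fit a transform mapping the material key points onto the detected ones.
    matrix, _ = cv2.estimateAffinePartial2D(
        material_keypoints.astype(np.float32),
        image_keypoints.astype(np.float32))
    warped = cv2.warpAffine(material_rgba, matrix, (w, h))
    # Alpha-blend the warped material over the original image.
    alpha = warped[:, :, 3:4].astype(np.float32) / 255.0
    blended = (image.astype(np.float32) * (1.0 - alpha)
               + warped[:, :, :3].astype(np.float32) * alpha)
    return blended.astype(np.uint8)
```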
As another alternative embodiment, a target object mapping scheme based on a three-dimensional target object mesh and three-dimensional materials may also use the standard three-dimensional target object model materials and the various makeup materials matched with the standard target object model. The difference is that such a scheme depends on a three-dimensional target object alignment model, which outputs a three-dimensional target object mesh (or three-dimensional target object); the three-dimensional target object materials are pasted directly onto the three-dimensional target object, which is then projected onto a preset two-dimensional plane and rendered. However, it should be noted that, in general, the development difficulty and computational load of a three-dimensional target object alignment model are significantly higher than those of a two-dimensional target object alignment model; in particular, at the mobile end the three-dimensional target object alignment model has not yet been popularized, and the two-dimensional target object alignment model is more widely applied.
The present application further provides a method for obtaining a target object map as shown in fig. 6, where fig. 6 is a flowchart of another method for obtaining a target object map according to an embodiment of the present invention, and as shown in fig. 6, the method for obtaining a target object map includes:
step S802, receiving a target object image from a client;
step S804, obtaining pose information of a target object in the target object image, performing pose adjustment on a first model based on the pose information of the target object to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial pose;
step S806, sending the target object map to the client.
It should be noted that the execution subject of the method for obtaining a target object map provided in this embodiment of the present application may be a SaaS server. The method receives a target object image from a client, uses a standard three-dimensional target object model, applies the target object pose information in the actual target object image to the standard three-dimensional target object model to generate a two-dimensional planar material map, performs mapping processing to obtain the target object map, and sends the target object map to the SaaS client. In this way, the corresponding two-dimensional planar material map is obtained according to the actual target object pose information and mapping processing is then performed to obtain the target object map, rather than ignoring the actual situation and directly applying a frontal-face two-dimensional material map to the target object image. This avoids the technical problem that, due to differences in target object pose information, the material point locations in the material map do not accurately match the target object key point locations in the target object image, which in turn causes material stretching and drifting.
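A minimal server-side sketch of steps S802-S806 is given below, assuming an HTTP interface built with Flask; the endpoint path and the get_target_object_map() placeholder are purely illustrative stand-ins for the pipeline sketched in the earlier steps.

```python
# Minimal sketch of steps S802-S806 on a SaaS server, assuming Flask.
import cv2
import numpy as np
from flask import Flask, Response, request

app = Flask(__name__)


def get_target_object_map(image):
    # Placeholder for the pipeline of steps S202-S206 sketched earlier
    # (pose estimation, model pose adjustment, projection and attachment).
    return image


@app.route("/target-object-map", methods=["POST"])
def target_object_map():
    # Step S802: receive the target object image from the client.
    data = request.files["image"].read()
    image = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
    # Step S804: obtain the pose, adjust the model, project and map.
    result = get_target_object_map(image)
    # Step S806: send the target object map back to the client.
    ok, buf = cv2.imencode(".png", result)
    return Response(buf.tobytes(), mimetype="image/png")
```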
Optionally, the target object may be flexibly set according to the actual application scenario, and the scheme can be applied to human life scenarios and animal life scenarios. In a human life scenario, the target object may be a specific part of the human being itself, such as a human face, or a specific ornament worn by a human, such as a hair band. In an animal life scenario, the target object may be a specific part of the animal itself, such as a cat face, or a specific ornament worn by an animal, such as a collar.
Optionally, the target object image is obtained by capturing the target object with a smart terminal (e.g., a smart phone, a smart robot, a smart watch, a camera, an iPad, etc.) and includes an image of the target object, for example, an image including a human face obtained by photographing a person. The image type of the target object image may be a color image, a black-and-white image, a photo image, a video image, etc.
Optionally, the target object posture information includes but is not limited to: posture information such as side face, head shaking, eye closing, head raising, head lowering and the like; the first model is a three-dimensional target object model in an initial posture, and the three-dimensional target object model in the initial posture is a standard three-dimensional target object model.
Optionally, the material map may be a two-dimensional material map, for example, a two-dimensional planar material map of any type, such as cute cartoon whiskers, cartoon eyes and cartoon mouths, as well as text and pictures.
Optionally, target object mapping means applying various material maps to the target object image by using an image algorithm, so as to achieve mapping effects such as beautification, makeup, stickers, and decoration.
The embodiment of the present invention adopts a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map. In this way, the material point locations in the two-dimensional material map can be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, an embodiment of an apparatus for implementing the above method for obtaining a target object map is further provided. Fig. 7 is a schematic structural diagram of an apparatus for obtaining a target object map according to an embodiment of the present invention. As shown in Fig. 7, the apparatus includes: an obtaining module 500, an adjusting module 502 and a processing module 504, wherein:
an obtaining module 500, configured to obtain target object posture information in a target object image; an adjusting module 502, configured to perform pose adjustment on a first model based on the pose information of the target object to obtain a material map corresponding to the target object image, where the first model is a three-dimensional target object model in an initial pose; and the processing module 504 is configured to perform mapping processing on the material map to obtain a target object map.
It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map. In this way, the material point locations in the two-dimensional material map can be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted here that the acquiring module 500, the adjusting module 502 and the processing module 504 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that, reference may be made to the relevant description in embodiment 1 for a preferred implementation of this embodiment, and details are not described here again.
Example 3
According to an embodiment of the present invention, there is further provided an embodiment of an electronic device, which may be any one of computing devices in a computing device group. The electronic device includes: a processor and a memory, wherein:
a processor; and a memory, connected to the processor, for providing instructions to the processor for processing the following processing steps: step 1, acquiring target object posture information in a target object image; step 2, carrying out posture adjustment on a first model based on the posture information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial posture; and 3, mapping the material map to obtain a target object map.
It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map. In this way, the material point locations in the two-dimensional material map can be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It should be noted that, reference may be made to the relevant description in embodiment 1 for a preferred implementation of this embodiment, and details are not described here again.
Example 4
According to an embodiment of the present invention, there may be provided an embodiment of a computer terminal, which may be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps in the method for obtaining a target object map: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Alternatively, fig. 8 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 8, the computer terminal may include: one or more processors 602 (only one of which is shown), a memory 604, and a peripheral interface 606.
The memory may be configured to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for obtaining a target object map in the embodiment of the present invention. The processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the above-mentioned method for obtaining a target object map. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Optionally, the processor may further execute the program code of the following steps: detecting a plurality of first key points from the target object image by using a third model, wherein the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image; and performing attitude estimation by adopting the plurality of first key points to obtain the attitude information of the target object.
Optionally, the processor may further execute the program code of the following steps: performing attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude; and determining the two-dimensional material map corresponding to the target object image by using the second model.
Optionally, the processor may further execute the program code of the following steps: acquiring a plurality of second key points from the first model, wherein the second key points are target object key points in the three-dimensional target object model in the initial posture; and performing posture adjustment on the three-dimensional coordinates of the plurality of second key points based on the posture information of the target object to obtain the second model.
Optionally, the processor may further execute the program code of the following steps: and projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
Optionally, the processor may further execute the program code of the following steps: acquiring a plurality of third key points from the material map, wherein the third key points are key points of the target object in the material map; and attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points, to obtain the target object map.
Optionally, the processor may further execute the program code of the following steps: receiving a target object image from a client; acquiring target object posture information in the target object image, performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial posture; and sending the target object map to the client.
The embodiment of the invention provides a scheme for acquiring a target object map. It is readily noted that the embodiment of the present invention employs a standard three-dimensional target object model and various makeup materials matched with the standard three-dimensional target object model. In practical application, the posture of the standard three-dimensional target object model is adjusted according to the target object posture information in the obtained target object image, and the adjusted three-dimensional target object model is projected onto a two-dimensional plane to obtain a two-dimensional material map. In this way, the material point locations in the two-dimensional material map can be matched with the target object key point locations more efficiently and accurately, and the target object map is obtained by performing mapping processing on the two-dimensional material map.
Therefore, the method and the device achieve the purpose of efficiently and accurately matching the material point locations and the target object key point locations in the two-dimensional material map, improve the matching accuracy of the map point locations, and avoid the problems of stretching of the two-dimensional material and drifting of the material, thereby realizing the technical effects of improving the product performance and the user experience of the intelligent makeup product, and further solving the technical problems of stretching of the two-dimensional material and drifting of the material caused by inaccurate matching of the map point locations in the target object map processing method in the related technology.
It can be understood by those skilled in the art that the structure shown in fig. 8 is only an illustration, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 8 does not limit the structure of the above electronic device. For example, the computer terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 8, or have a different configuration than shown in fig. 8.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the computer-readable storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 5
Embodiments of a computer-readable storage medium are also provided according to embodiments of the present invention. Optionally, in this embodiment, the computer-readable storage medium may be configured to store the program code executed by the method for obtaining the target object map provided in embodiment 1.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring target object posture information in a target object image; performing attitude adjustment on a first model based on the attitude information of the target object to obtain a material graph corresponding to the target object image, wherein the first model is a three-dimensional target object model in an initial attitude; and mapping the material map to obtain a target object map.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: detecting a plurality of first key points from the target object image by using a third model, wherein the third model is a two-dimensional target object alignment model, and the plurality of first key points are target object key points in the target object image; and performing attitude estimation by adopting the plurality of first key points to obtain the attitude information of the target object.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: performing attitude adjustment on the first model based on the attitude information of the target object to obtain a second model, wherein the second model is a three-dimensional target object model of a target attitude; and determining the two-dimensional material map corresponding to the target object image by using the second model.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring a plurality of second key points from the first model, wherein the second key points are target object key points in the three-dimensional target object model in the initial posture; and performing posture adjustment on the three-dimensional coordinates of the plurality of second key points based on the posture information of the target object to obtain the second model.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: and projecting the second model to a preset two-dimensional plane to obtain the two-dimensional material map.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: acquiring a plurality of third key points from the material map, wherein the third key points are key points of the target object in the material map; and attaching the material map to the target object image by using the correspondence between the plurality of first key points and the plurality of third key points, to obtain the target object map.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: receiving a target object image from a client; acquiring target object posture information in the target object image, performing posture adjustment on a first model based on the target object posture information to obtain a material map corresponding to the target object image, and performing mapping processing on the material map to obtain a target object map, wherein the first model is a three-dimensional target object model in an initial posture; and sending the target object map to the client.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.