Disclosure of Invention
The technical problem to be solved by the invention is to provide a coloring method, based on a generative adversarial network, that colors quickly and smoothly, preserves the color integrity of the colored augmented reality target object under translation and scaling, and produces vivid coloring in different styles according to different requirements. The coloring method supports machine learning: it can learn the composition, drawing and color habits of different designers to form style models, and can render and color according to the chosen style. The coloring method adopts a parallel design, so rendering and coloring are fast, and the method of fusing the virtual object with the video stream background allows the object to be tracked and positioned, achieving real-time coloring.
In order to achieve the above object, the present invention provides an augmented reality image coloring method for home decoration design based on generative adversarial network technology, comprising the steps of:
S101: collecting real-time video;
S102: scanning the digital marker;
S103: identifying the marker with the augmented reality program;
S104: matching the marker with the three-dimensional virtual object;
S105: adjusting the position of the three-dimensional virtual model according to the position of the marker;
S106: determining the style requirement;
S107: matching the pre-trained coloring model library;
S108: fusing the virtual object with the video stream background;
S109: coloring the virtual object into the video stream.
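The following is a minimal illustrative sketch, not part of the claimed embodiment, of how steps S101 to S109 might be orchestrated in Python with OpenCV; every helper here (detect_marker, lookup_style_model, render_colored_object) is a hypothetical placeholder rather than the invention's actual implementation.

```python
import cv2
import numpy as np

def detect_marker(frame):
    """S102-S103: scan the digital marker and identify it; return a pose or None (placeholder)."""
    return None

def lookup_style_model(style_name):
    """S106-S107: match the style requirement against the pre-trained coloring model library (placeholder)."""
    return lambda pose: None

def render_colored_object(coloring_model, pose, frame_shape):
    """S104-S105 and S108: place and render the colored virtual object as an overlay layer (placeholder)."""
    return np.zeros(frame_shape, dtype=np.uint8)

capture = cv2.VideoCapture(0)                        # S101: collect real-time video
coloring_model = lookup_style_model("example_style") # hypothetical style label

while True:
    ok, frame = capture.read()
    if not ok:
        break
    pose = detect_marker(frame)                      # S102-S105: the marker drives object placement
    if pose is not None:
        layer = render_colored_object(coloring_model, pose, frame.shape)
        frame = cv2.addWeighted(frame, 1.0, layer, 1.0, 0)   # S108-S109: fuse and color into the stream
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) == 27:                         # Esc to stop
        break
capture.release()
cv2.destroyAllWindows()
```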
A first implementation of matching the pre-trained coloring model library comprises the following steps:
S201: inputting the vertex coordinates of the three-dimensional model to be colored;
S202: placing the three-dimensional model at the verification position in the three-dimensional scene;
S203: setting the camera angle and field of view;
S204: setting the position, color and direction parameters of the illumination;
S205: setting the color parameters of the three-dimensional model;
S206: inputting the colored model into the generative adversarial network model;
S207: storing the model that passes the discriminator network in the pre-trained model library.
In the invention, a second implementation of matching the pre-trained coloring model library comprises the following steps:
S401: inputting images of different types by different authors;
S402: building the generative adversarial network model from the input images;
S403: storing the model that passes the discriminator network in the pre-trained model library.
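As an illustration of step S401, the sketch below shows one hedged way to load images of different types by different authors as training data for the adversarial network; the directory layout style_images/<author>/*.jpg and the author-index label are assumptions, not part of the disclosure.

```python
import os
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset, DataLoader

class StyleImageDataset(Dataset):
    """Images of different types by different authors (S401), labelled by author index."""

    def __init__(self, root, image_size=128):
        self.image_size = image_size
        self.samples = []
        for label, author in enumerate(sorted(os.listdir(root))):
            author_dir = os.path.join(root, author)
            for name in os.listdir(author_dir):
                if name.lower().endswith((".jpg", ".jpeg", ".png")):
                    self.samples.append((os.path.join(author_dir, name), label))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        path, label = self.samples[index]
        image = Image.open(path).convert("RGB").resize((self.image_size, self.image_size))
        tensor = torch.from_numpy(np.array(image)).permute(2, 0, 1).float() / 255.0
        return tensor, label

# S402: batches drawn from this dataset would then be used to build the adversarial network model.
loader = DataLoader(StyleImageDataset("style_images"), batch_size=16, shuffle=True)
```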
In the invention, a method for fusing the virtual object with the video stream background comprises the following steps:
S601: identifying the contour of the background object in the video stream using a recognition program;
S602: extracting the position coordinates of the background object in the video stream;
S603: superimposing the virtual object on the background object of the video stream, using the position coordinates as reference points.
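A hedged sketch of steps S601 to S603 using OpenCV follows; the choice of Canny edge detection, the use of the largest contour as the background object, and OpenCV 4's findContours signature are assumptions made for illustration only.

```python
import cv2
import numpy as np

def fuse_virtual_object(frame, virtual_patch):
    """S601-S603: find the background object contour and superimpose the virtual object on it."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                               # S601: background object profile
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)                         # S602: position coordinates as reference
    patch = cv2.resize(virtual_patch, (w, h))
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.3, patch, 0.7, 0)   # S603: superimpose at the reference point
    return frame
```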
The home decoration design-oriented augmented reality image coloring method based on generative adversarial network technology has the following characteristics and beneficial effects:
1. The coloring time of the method is at the millisecond level, so rapid coloring can be achieved;
2. The method tracks and positions the object by fusing the virtual object with the video stream background, thereby achieving real-time coloring;
3. The method can color in the styles of different authors according to different requirements;
4. The method uses pre-trained coloring models of the generative adversarial network, which can be called more quickly than manual feature-based coloring.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention relates to an augmented reality image coloring method for home decoration design based on generative adversarial network technology, which, as shown in fig. 1, comprises the following steps:
S101, acquiring a real-time video stream using video acquisition equipment;
S102, scanning the digital marker;
S103, identifying the marker through the augmented reality program, and preliminarily determining the position and direction of the three-dimensional virtual object;
S104, matching the marker with the three-dimensional virtual object;
S105, readjusting the position of the three-dimensional virtual model according to the position of the marker;
S106, determining the style requirement;
S107, matching the pre-trained coloring model library;
S108, fusing the virtual object with the video stream background using a contour method, and determining its position so that the object can be tracked and positioned;
S109, coloring the virtual object into the video stream.
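As one hedged illustration of steps S102 to S105, the sketch below uses OpenCV's ArUco module (the opencv-contrib API prior to version 4.7 is assumed) to identify a digital marker and estimate the position and direction that would drive placement of the three-dimensional virtual model; the camera intrinsics and the 0.05 m marker size are assumed values, not parameters disclosed by the invention.

```python
import cv2
import numpy as np

# Assumed camera intrinsics, for illustration only.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

def locate_marker(frame, marker_length_m=0.05):
    """S102-S105: detect the digital marker and return its rotation and translation vectors, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=parameters)   # S102-S103
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length_m, camera_matrix, dist_coeffs)                             # S105: position and direction
    return rvecs[0], tvecs[0]   # S104: this pose places the matched three-dimensional virtual object
```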
Compared with calling textures stored in video memory, the pre-training approach provided by the invention accelerates coloring: matching the pre-trained coloring model library makes it possible to call existing models quickly.
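One possible way to organize such a library is sketched below, assuming (purely for illustration) that each pre-trained generator is stored as a PyTorch weight file named after its style; neither the file layout nor the generator factory is specified by the invention.

```python
import os
import torch

MODEL_LIBRARY_DIR = "pretrained_coloring_models"   # assumed on-disk location of the model library
_loaded_models = {}                                # in-memory cache: each generator is deserialized only once

def match_coloring_model(style_name, generator_factory):
    """S107: return the pre-trained generator matching the requested style, loading it on first use."""
    if style_name not in _loaded_models:
        weights_path = os.path.join(MODEL_LIBRARY_DIR, style_name + ".pt")
        generator = generator_factory()            # hypothetical constructor of the generator network
        generator.load_state_dict(torch.load(weights_path, map_location="cpu"))
        generator.eval()
        _loaded_models[style_name] = generator
    return _loaded_models[style_name]
```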
In the invention, a first implementation of matching the pre-trained coloring model library comprises the following steps:
S201, inputting the vertex coordinates of the three-dimensional model to be colored;
S202, placing the three-dimensional model at the verification position in the three-dimensional scene;
S203, setting the camera angle and field of view in the scene;
S204, setting the position, color and direction parameters of the illumination;
S205, setting the color parameters of the three-dimensional model;
S206, inputting the colored model into the generative adversarial network model;
S207, storing the model that passes the discriminator network in the pre-trained model library.
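The sketch below illustrates, under assumed field names and value ranges, how the render parameters of steps S201 to S205 might be bundled into a single training sample before the colored model is fed to the generative adversarial network in step S206.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ColoringTrainingSample:
    vertices: List[Tuple[float, float, float]]                          # S201: vertex coordinates of the model
    scene_position: Tuple[float, float, float] = (0.0, 0.0, 0.0)        # S202: verification position in the scene
    camera_angle_deg: float = 30.0                                      # S203: camera angle
    camera_fov_deg: float = 60.0                                        # S203: field of view
    light_position: Tuple[float, float, float] = (1.0, 2.0, 1.0)        # S204: illumination position
    light_color: Tuple[float, float, float] = (1.0, 1.0, 1.0)           # S204: illumination color
    light_direction: Tuple[float, float, float] = (0.0, -1.0, 0.0)      # S204: illumination direction
    model_color: Tuple[float, float, float] = (0.8, 0.6, 0.4)           # S205: color parameters of the model

# S206: a sample like this, together with the rendered colored model, is passed to the adversarial network.
sample = ColoringTrainingSample(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```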
The method for inputting the colored model into the generative adversarial network model comprises the following steps:
S301, inputting the colored three-dimensional model;
S302, storing the colored model in the discriminator network model library; the discriminator network stores colored models with the set parameters;
S303, the generator network outputting a single three-dimensional model;
S304, the discriminator network calculating a similarity value between the generated model and the model library;
S305, if the similarity value between the generated model and the model library is greater than or equal to a preset threshold, judging that the generated colored three-dimensional model is close to a real model; if the similarity value is smaller than the preset threshold, judging that the three-dimensional model colored by the generator network is unreal, and repeating steps S303 and S304 until the discriminator network judges the generated coloring model to be real;
S306, outputting the three-dimensional coloring model that passes the judgment in step S305.
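A minimal PyTorch sketch of the loop described in steps S301 to S306 is given below; the network sizes, the 64-dimensional noise input, the 0.8 threshold, and the flattened representation of the colored model are illustrative assumptions rather than parameters disclosed by the invention.

```python
import torch
import torch.nn as nn

FEATURE_DIM = 256          # assumed size of the flattened colored-model representation
REALISM_THRESHOLD = 0.8    # assumed value of the preset similarity threshold in S305

generator = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, FEATURE_DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(FEATURE_DIM, 512), nn.ReLU(), nn.Linear(512, 1), nn.Sigmoid())

optimizer_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
criterion = nn.BCELoss()

def generate_until_realistic(max_rounds=1000):
    """S303-S306: regenerate until the discriminator judges the colored model to be real."""
    for _ in range(max_rounds):
        noise = torch.randn(1, 64)
        candidate = generator(noise)                  # S303: the generator network outputs a single model
        score = discriminator(candidate)              # S304: similarity/realism value against the model library
        if score.item() >= REALISM_THRESHOLD:         # S305: compare with the preset threshold
            return candidate.detach()                 # S306: output the accepted coloring model
        # Below the threshold the candidate is judged unreal; update the generator and repeat S303-S304.
        loss = criterion(score, torch.ones_like(score))
        optimizer_g.zero_grad()
        loss.backward()
        optimizer_g.step()
    return None
```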
In the invention, a second implementation of matching the pre-trained coloring model library comprises the following steps, as shown in fig. 4:
S401, inputting images of different types by different known authors;
S402, building the generative adversarial network model from the input images;
S403, storing the model that passes the discriminator network in the pre-trained model library.
Building the generative adversarial network model from the input images comprises the following steps:
S501, inputting a finished image model;
S502, storing the colored model in the discriminator network model library; the discriminator network stores colored models with the set parameters;
S503, the generator network outputting a single model;
S504, the discriminator network calculating a similarity value between the generated model and the model library;
S505, if the similarity value between the generated model and the model library is greater than or equal to a preset threshold, judging that the generated colored image model is close to a real model; if the similarity value is smaller than the preset threshold, judging that the image model colored by the generator network is unreal, and repeating steps S503 and S504 until the discriminator network judges the generated coloring model to be real.
The pre-trained coloring model library provided by the invention is realized with a deep learning-based generative adversarial network. The coloring method provided by the invention can track the target and fuse it with the scene in real time.
The method for fusing the virtual object with the video stream background provided by the invention comprises the following steps:
S601, identifying the contour of the background object in the video stream using a recognition program;
S602, extracting the position coordinates of the background object in the video stream;
S603, superimposing the virtual object on the background object of the video stream, using the position coordinates as reference points.
In addition to the above implementations, the invention can have other embodiments; all technical solutions formed by equivalent substitution or equivalent transformation fall within the protection scope of the invention.