Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, and a computer-readable storage medium for implementing augmented reality (AR) that overcome, or at least partially solve, the above problems.
According to one aspect of the present invention, there is provided a method for implementing augmented reality (AR), including:
acquiring a video stream captured by a camera;
identifying a plane from the video stream;
and deploying at least some AR elements of an AR model in the video stream, with the plane serving as a reference plane of the AR model, to generate an AR video stream and display it on a display interface.
Optionally, the method further comprises:
identifying a target object from the video stream;
the deploying at least some AR elements of the AR model in the video stream comprises: determining whether the target object matches the deployment condition of any AR element in the AR model;
and completing the deployment of the corresponding AR element according to the matched deployment condition.
Optionally, the method further comprises: recognizing the posture of the target object;
the deploying at least some AR elements of the AR model in the video stream further comprises: determining whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or determining whether a posture change of the target object matches the deployment condition of any AR element;
and completing the deployment of the corresponding AR element according to the matched deployment condition.
Optionally, the deployment condition of the AR element includes one or more of:
deploying an undeployed AR element;
deleting a deployed AR element;
changing the presentation state of a deployed AR element.
Optionally, the completing the deployment of the corresponding AR element according to the matched deployment condition includes:
determining deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or determining the deployment parameters in the matched deployment condition according to the posture of the target object.
Optionally, the method further comprises:
first displaying the video stream on the display interface;
the taking the plane as a reference plane of the AR model comprises: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
According to another aspect of the present invention, there is provided an apparatus for implementing augmented reality (AR), including:
an acquisition unit adapted to acquire a video stream captured by a camera;
an identification unit adapted to identify a plane from the video stream;
and an AR unit adapted to deploy at least some AR elements of an AR model in the video stream, with the plane serving as a reference plane of the AR model, to generate an AR video stream and display it on a display interface.
Optionally, the identification unit is further adapted to identify a target object from the video stream;
and the AR unit is adapted to determine whether the target object matches the deployment condition of any AR element in the AR model, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
Optionally, the identification unit is further adapted to recognize the posture of the target object;
and the AR unit is adapted to determine whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
Optionally, the deployment condition of the AR element includes one or more of:
deploying an undeployed AR element;
deleting a deployed AR element;
changing the presentation state of a deployed AR element.
Optionally, the AR unit is adapted to determine deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or to determine the deployment parameters according to the posture of the target object.
Optionally, the AR unit is further adapted to first display the video stream on the display interface and, when a plurality of planes are identified, to determine, in response to a selection instruction on the display interface, the plane closest to the selection instruction and take that closest plane as the reference plane of the AR model.
According to a further aspect of the present invention, there is provided a computer-readable storage medium storing one or more programs which, when executed by a processor, implement the method described in any of the above.
According to the above technical solutions, after the video stream captured by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. By using a real-world plane as the link to the virtual world, these solutions achieve a seamless fusion of the virtual and real worlds, so that the generated AR video stream looks more natural and engaging.
The foregoing is merely an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, and to make the above and other objects, features, and advantages of the invention more readily understandable, embodiments of the invention are described below.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flowchart of a method for implementing augmented reality (AR) according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
and step S110, acquiring the video stream collected by the camera.
Taking a mobile phone as an example of a device running the AR application, the phone's camera captures the video stream, which serves as real-world data.
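As a minimal illustration of step S110, the following Python sketch reads frames from a camera with OpenCV. The library choice and the device index are assumptions; the patent does not name a capture API.

```python
# Illustrative only: OpenCV capture standing in for the phone camera.
import cv2

def acquire_frames(device_index: int = 0):
    """Yield frames from the camera as the real-world video stream."""
    capture = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # camera disconnected or stream ended
                break
            yield frame
    finally:
        capture.release()
```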
Step S120: identifying a plane from the video stream.
The identified plane may be, for example, a horizontal plane such as a floor or a tabletop, a vertical plane such as a wall or a mirror, or an inclined plane such as a slide or a slope.
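The patent does not prescribe a detection algorithm. One common approach, shown in the hedged sketch below, fits a plane to reconstructed 3D points with RANSAC; the `points` array (N rows of x, y, z, e.g. produced by visual SLAM) and all thresholds are assumptions.

```python
import numpy as np

def fit_plane_ransac(points, iterations=200, threshold=0.01, rng=None):
    """Return (normal, d) of the best plane n.x + d = 0, or None."""
    rng = rng or np.random.default_rng()
    best = None
    best_inliers = 0
    for _ in range(iterations):
        # sample three points and form a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # count points within `threshold` of the candidate plane
        inliers = int(np.sum(np.abs(points @ normal + d) < threshold))
        if inliers > best_inliers:
            best, best_inliers = (normal, d), inliers
    return best
```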
Step S130: deploying at least some AR elements of the AR model in the video stream, with the plane as the reference plane of the AR model, to generate an AR video stream and display it on a display interface.
An AR model is three-dimensional: its x, y, and z axes pairwise define three reference planes, and the identified plane can be made to coincide with any one of them, thereby fusing the virtual world with the real world.
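To make the identified plane coincide with a reference plane, it suffices to rotate the model so that the reference plane's normal matches the detected plane's normal. The sketch below computes such a rotation with Rodrigues' formula; it is a geometric illustration, not the patent's prescribed procedure.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix that maps unit vector a onto unit vector b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(a.dot(b))
    if np.isclose(c, 1.0):   # normals already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):  # opposite: rotate 180 degrees about an axis perpendicular to a
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + k + k @ k * (1.0 / (1.0 + c))
```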
As can be seen, in the method shown in Fig. 1, after the video stream captured by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. By using a real-world plane as the link to the virtual world, this solution achieves a seamless fusion of the virtual and real worlds, so that the generated AR video stream looks more natural and engaging.
In an embodiment of the present invention, the method further includes: identifying a target object from the video stream. Deploying at least some of the AR elements of the AR model in the video stream then includes: determining whether the target object matches the deployment condition of any AR element in the AR model, and completing the deployment of the corresponding AR element according to the matched deployment condition.
Consider the following scenario: the AR model is a fountain set containing sixteen water jets. After an open door is identified from the video stream, the fountain is deployed outside the door, but only eight of the jets are deployed because of the constraint imposed by the door's size.
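One way to picture this matching step is to attach a predicate to each AR element and deploy the elements whose predicate the target object satisfies. In the sketch below every class, field, and the doorway encoding are hypothetical; the patent only requires that elements be matched against deployment conditions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ARElement:
    name: str
    condition: Callable[[dict], bool]  # deployment condition over the target
    deployed: bool = False

def deploy_matching(elements: list, target: dict) -> None:
    """Deploy every element whose condition matches the target object."""
    for element in elements:
        if not element.deployed and element.condition(target):
            element.deployed = True

# Fountain scenario: sixteen jets, but only the eight that fit the doorway
# deploy (the door is encoded here, arbitrarily, by a width in "slots").
jets = [
    ARElement(
        f"jet-{i}",
        lambda t, i=i: t.get("kind") == "open_door" and i < t.get("width_slots", 0),
    )
    for i in range(16)
]
deploy_matching(jets, {"kind": "open_door", "width_slots": 8})
assert sum(j.deployed for j in jets) == 8
```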
In an embodiment of the present invention, the method further includes: recognizing the posture of the target object. Deploying at least some of the AR elements of the AR model in the video stream then further includes: determining whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and completing the deployment of the corresponding AR element according to the matched deployment condition.
For example, if the target object is a person striking the "aircraft carrier style" pose, the AR elements of an aircraft carrier model, such as fighter jets and a deck, are deployed in the video stream; this is a case of determining deployment from a static posture. As another example, when a user holds a stick-shaped object with both hands and swings it left and right, two AR elements of the AR model, a flagpole and a flag, are deployed in the video stream: the stick-shaped object is overlaid with the flagpole, and the flag waves as the user's hands swing.
In an embodiment of the present invention, in the above method, the deployment condition of an AR element includes one or more of the following: deploying an undeployed AR element; deleting a deployed AR element; changing the presentation state of a deployed AR element.
For example, a user holds an ordinary hat, which is replaced with a magic hat in the AR video stream. When the user first turns the hat over, it is empty. When the user puts the hat on and takes it off again (a posture change), turning the hat over now reveals a rabbit climbing out; to achieve this, the rabbit AR element must be deployed at that point. The user then presses the rabbit back into the hat and blows on it (another posture change), and when the hat is turned over again the rabbit is gone; to achieve this, the rabbit AR element must be deleted. As another example, an enormous pillar is deployed on the ground; when the user performs a pushing action on it, the pillar slowly tilts. This is an example of changing the presentation state of a deployed AR element.
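These three outcomes can be read as three actions on the set of deployed elements. The sketch below models them with an enum and a scene dictionary; all names are illustrative, not from the patent.

```python
from enum import Enum, auto

class DeployAction(Enum):
    DEPLOY = auto()        # deploy an undeployed AR element
    DELETE = auto()        # delete a deployed AR element
    CHANGE_STATE = auto()  # change a deployed element's presentation state

def apply_action(scene: dict, element: str, action: DeployAction, state=None):
    if action is DeployAction.DEPLOY:
        scene[element] = state or "default"
    elif action is DeployAction.DELETE:
        scene.pop(element, None)
    elif action is DeployAction.CHANGE_STATE and element in scene:
        scene[element] = state

# Magic-hat example: the rabbit is deployed, then deleted; the pillar's
# presentation state changes when it is pushed.
scene = {}
apply_action(scene, "rabbit", DeployAction.DEPLOY)
apply_action(scene, "rabbit", DeployAction.DELETE)
apply_action(scene, "pillar", DeployAction.DEPLOY, state="upright")
apply_action(scene, "pillar", DeployAction.CHANGE_STATE, state="tilting")
```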
In an embodiment of the present invention, in the above method, completing the deployment of the corresponding AR element according to the matched deployment condition includes: determining deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or determining the deployment parameters according to the posture of the target object.
For example, an AR element of playing cards is deployed on the tabletop in front of the user. When the user slaps the table, the cards fly up and hover in the air; to achieve this effect, the speed and height at which the cards fly can be determined from the speed and amplitude of the slap. The same playing-card AR element can also produce a different effect: when the user picks up the cards and performs a fancy shuffle, a flamboyant shuffling effect is shown. Both effects are realized from the same AR element; the deployment parameters in the element's deployment condition change with the type of posture change of the target object (the person), so the displayed AR effect differs.
Determining the deployment parameters from the posture of the target object is easier to understand: taking the earlier "aircraft carrier" example, the position, takeoff angle, and so on of the fighter jets must be determined from the user's posture.
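A hedged sketch of this mapping: the recognized posture change (its type, speed, and amplitude) is translated into deployment parameters for the matched element. The function name, the change types, and the scale factors are all assumptions for illustration.

```python
def deployment_parameters(change_type: str, speed: float, amplitude: float) -> dict:
    """Map a posture change onto deployment parameters for the AR element."""
    if change_type == "table_slap":
        # faster, bigger slaps make the cards fly faster and hover higher
        return {"effect": "cards_fly",
                "rise_speed": 2.0 * speed,
                "hover_height": 0.5 * amplitude}
    if change_type == "fancy_shuffle":
        return {"effect": "shuffle_flourish", "tempo": speed}
    return {}  # no matching deployment condition

params = deployment_parameters("table_slap", speed=1.2, amplitude=0.8)
```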
In an embodiment of the present invention, the method further includes: first displaying the video stream on the display interface. Taking the plane as the reference plane of the AR model then includes: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
Sometimes multiple planes, such as a tabletop and the ground, are identified in the video stream. If the intended effect is to plant a flag on a plane, the user needs to specify whether the flag goes on the tabletop or on the ground. In this case the video stream is first displayed on the display interface; the user can tap the displayed tabletop or ground, and the system determines the corresponding plane as the one on which the flag is planted according to the user's tap.
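A geometric sketch of this selection, under the assumption that the tap is first converted into a ray from the camera through the tapped pixel (that conversion is device specific and not shown): intersect the ray with every candidate plane and keep the nearest hit in front of the camera.

```python
import numpy as np

def nearest_plane(planes, ray_origin, ray_direction):
    """planes: list of (normal, d) with n.x + d = 0. Returns an index or None."""
    best_index, best_t = None, np.inf
    for index, (normal, d) in enumerate(planes):
        denom = float(np.dot(normal, ray_direction))
        if abs(denom) < 1e-9:  # ray parallel to this plane
            continue
        t = -(np.dot(normal, ray_origin) + d) / denom
        if 0.0 < t < best_t:   # nearest intersection in front of the camera
            best_index, best_t = index, t
    return best_index
```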
Fig. 2 is a schematic structural diagram of an apparatus for implementing augmented reality (AR) according to an embodiment of the present invention. As shown in Fig. 2, an apparatus 200 for implementing AR includes:
the obtainingunit 210 is adapted to obtain a video stream collected by a camera.
Taking a mobile phone as an example of a device running the AR application, the phone's camera captures the video stream, which serves as real-world data.
The identification unit 220 is adapted to identify a plane from the video stream.
The identified plane may be, for example, a horizontal plane such as a floor or a tabletop, a vertical plane such as a wall or a mirror, or an inclined plane such as a slide or a slope.
The AR unit 230 is adapted to deploy at least some AR elements of the AR model in the video stream, with the plane as the reference plane of the AR model, to generate an AR video stream and display it on the display interface.
An AR model is three-dimensional: its x, y, and z axes pairwise define three reference planes, and the identified plane can be made to coincide with any one of them, thereby fusing the virtual world with the real world.
As can be seen, in the apparatus shown in Fig. 2, the units cooperate so that, after the video stream captured by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. By using a real-world plane as the link to the virtual world, this solution achieves a seamless fusion of the virtual and real worlds, so that the generated AR video stream looks more natural and engaging.
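As a rough structural sketch only, the cooperation of the three units might be wired up as below; every class and method name is hypothetical, chosen to mirror the unit names of apparatus 200, and each body is a stub standing in for the real processing.

```python
class AcquisitionUnit:
    def frames(self, camera):
        yield from camera  # stand-in for real capture (unit 210)

class IdentificationUnit:
    def identify_plane(self, frame):
        return ("normal", "d")  # stand-in for plane detection (unit 220)

class ARUnit:
    def render(self, frame, plane, model):
        # stand-in for deploying AR elements against the reference plane (unit 230)
        return {"frame": frame, "plane": plane, "model": model}

class ARApparatus:
    def __init__(self):
        self.acquisition = AcquisitionUnit()
        self.identification = IdentificationUnit()
        self.ar = ARUnit()

    def run(self, camera, model):
        for frame in self.acquisition.frames(camera):
            plane = self.identification.identify_plane(frame)
            yield self.ar.render(frame, plane, model)
```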
In an embodiment of the present invention, in the above apparatus, the identification unit 220 is further adapted to identify a target object from the video stream; the AR unit 230 is adapted to determine whether the target object matches the deployment condition of any AR element in the AR model, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
Consider the following scenario: the AR model is a fountain set containing sixteen water jets. After an open door is identified from the video stream, the fountain is deployed outside the door, but only eight of the jets are deployed because of the constraint imposed by the door's size.
In an embodiment of the present invention, in the above apparatus, the identification unit 220 is further adapted to recognize the posture of the target object; the AR unit 230 is adapted to determine whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
For example, if the target object is a person striking the "aircraft carrier style" pose, the AR elements of an aircraft carrier model, such as fighter jets and a deck, are deployed in the video stream; this is a case of determining deployment from a static posture. As another example, when a user holds a stick-shaped object with both hands and swings it left and right, two AR elements of the AR model, a flagpole and a flag, are deployed in the video stream: the stick-shaped object is overlaid with the flagpole, and the flag waves as the user's hands swing.
In an embodiment of the present invention, in the above apparatus, the deployment condition of an AR element includes one or more of: deploying an undeployed AR element; deleting a deployed AR element; changing the presentation state of a deployed AR element.
For example, a user holds an ordinary hat, which is replaced with a magic hat in the AR video stream. When the user first turns the hat over, it is empty. When the user puts the hat on and takes it off again (a posture change), turning the hat over now reveals a rabbit climbing out; to achieve this, the rabbit AR element must be deployed at that point. The user then presses the rabbit back into the hat and blows on it (another posture change), and when the hat is turned over again the rabbit is gone; to achieve this, the rabbit AR element must be deleted. As another example, an enormous pillar is deployed on the ground; when the user performs a pushing action on it, the pillar slowly tilts. This is an example of changing the presentation state of a deployed AR element.
In an embodiment of the present invention, in the above apparatus, the AR unit 230 is adapted to determine deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or to determine the deployment parameters according to the posture of the target object.
For example, an AR element of playing cards is deployed on the tabletop in front of the user. When the user slaps the table, the cards fly up and hover in the air; to achieve this effect, the speed and height at which the cards fly can be determined from the speed and amplitude of the slap. The same playing-card AR element can also produce a different effect: when the user picks up the cards and performs a fancy shuffle, a flamboyant shuffling effect is shown. Both effects are realized from the same AR element; the deployment parameters in the element's deployment condition change with the type of posture change of the target object (the person), so the displayed AR effect differs.
Determining the deployment parameters from the posture of the target object is easier to understand: taking the earlier "aircraft carrier" example, the position, takeoff angle, and so on of the fighter jets must be determined from the user's posture.
In an embodiment of the present invention, in the above apparatus, the AR unit 230 is further adapted to first display the video stream on the display interface and, when a plurality of planes are identified, to determine, in response to a selection instruction on the display interface, the plane closest to the selection instruction and take that closest plane as the reference plane of the AR model.
Sometimes multiple planes, such as a tabletop and the ground, are identified in the video stream. If the intended effect is to plant a flag on a plane, the user needs to specify whether the flag goes on the tabletop or on the ground. In this case the video stream is first displayed on the display interface; the user can tap the displayed tabletop or ground, and the system determines the corresponding plane as the one on which the flag is planted according to the user's tap.
In summary, according to the technical solution of the present invention, after the video stream captured by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. By using a real-world plane as the link to the virtual world, this solution achieves a seamless fusion of the virtual and real worlds, so that the generated AR video stream looks more natural and engaging.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of an apparatus for implementing augmented reality (AR) according to an embodiment of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
Fig. 3 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. The computer-readable storage medium 300 stores computer-readable program code 310 for performing the steps of the method according to the invention, for example program code readable by a processor of an electronic device which, when executed by the electronic device, causes the electronic device to perform the steps of the method described above. In particular, the program code stored on the computer-readable storage medium may perform the method shown in any of the embodiments described above. The program code may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
Embodiments of the present invention disclose A1, a method for implementing AR, comprising:
acquiring a video stream captured by a camera;
identifying a plane from the video stream;
and deploying at least some AR elements of an AR model in the video stream, with the plane serving as a reference plane of the AR model, to generate an AR video stream and display it on a display interface.
A2. The method of A1, wherein the method further comprises:
identifying a target object from the video stream;
the deploying at least some AR elements of the AR model in the video stream comprises: determining whether the target object matches the deployment condition of any AR element in the AR model;
and completing the deployment of the corresponding AR element according to the matched deployment condition.
A3. The method of A2, wherein the method further comprises: recognizing the posture of the target object;
the deploying at least some AR elements of the AR model in the video stream further comprises: determining whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or determining whether a posture change of the target object matches the deployment condition of any AR element;
and completing the deployment of the corresponding AR element according to the matched deployment condition.
A4. The method of A2 or A3, wherein the deployment condition of an AR element includes one or more of:
deploying an undeployed AR element;
deleting a deployed AR element;
changing the presentation state of a deployed AR element.
A5. The method of A3, wherein the completing the deployment of the corresponding AR element according to the matched deployment condition comprises:
determining deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or determining the deployment parameters according to the posture of the target object.
A6. The method of A1, wherein the method further comprises:
first displaying the video stream on the display interface;
the taking the plane as a reference plane of the AR model comprises: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
Embodiments of the present invention also disclose B7, an apparatus for implementing AR, comprising:
an acquisition unit adapted to acquire a video stream captured by a camera;
an identification unit adapted to identify a plane from the video stream;
and an AR unit adapted to deploy at least some AR elements of an AR model in the video stream, with the plane serving as a reference plane of the AR model, to generate an AR video stream and display it on a display interface.
B8. The apparatus of B7, wherein
the identification unit is further adapted to identify a target object from the video stream;
and the AR unit is adapted to determine whether the target object matches the deployment condition of any AR element in the AR model, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
B9. The apparatus of B8, wherein
the identification unit is further adapted to recognize the posture of the target object;
and the AR unit is adapted to determine whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
B10. The apparatus of B8 or B9, wherein the deployment condition of an AR element includes one or more of:
deploying an undeployed AR element;
deleting a deployed AR element;
changing the presentation state of a deployed AR element.
B11. The apparatus of B9, wherein
the AR unit is adapted to determine deployment parameters in the matched deployment condition according to one or more of the speed, amplitude, and type of the posture change of the target object, and/or to determine the deployment parameters according to the posture of the target object.
B12. The apparatus of B7, wherein
the AR unit is further adapted to first display the video stream on the display interface and, when a plurality of planes are identified, to determine, in response to a selection instruction on the display interface, the plane closest to the selection instruction and take that closest plane as the reference plane of the AR model.
Embodiments of the present invention also disclose C13, a computer-readable storage medium storing one or more programs which, when executed by a processor, implement the method described in any of A1 to A6.