
Method and device for realizing augmented reality AR and computer readable storage medium

Info

Publication number
CN108090968B
Authority
CN
China
Prior art keywords
video stream
deployment
model
matched
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711483513.2A
Other languages
Chinese (zh)
Other versions
CN108090968A (en)
Inventor
杨颖慧
李海燕
戴星阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangrui Hengyu Beijing Technology Co ltd
Original Assignee
Guangrui Hengyu Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangrui Hengyu Beijing Technology Co ltd
Priority to CN201711483513.2A
Publication of CN108090968A
Application granted
Publication of CN108090968B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a method and a device for realizing augmented reality (AR) and a computer-readable storage medium. The method comprises the following steps: acquiring a video stream collected by a camera; identifying a plane from the video stream; and, taking the plane as a reference plane of the AR model, deploying at least some of the AR elements of the AR model in the video stream, generating an AR video stream and displaying it on a display interface. In this technical scheme, a plane in the real world serves as the link to the virtual world, achieving a clever fusion of the virtual world and the real world, so that the generated AR video stream is more natural and more engaging.

Description

Method and device for realizing augmented reality AR and computer readable storage medium
Technical Field
The invention relates to the field of augmented reality, in particular to a method and a device for realizing augmented reality AR and a computer readable storage medium.
Background
AR (Augmented Reality) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models to it; its purpose is to overlay a virtual world on the real world shown on a screen and to allow interaction between the two. Generally, an AR application is preset with an AR model, but how to combine the AR model with the real scene to achieve a good effect is a problem still to be solved.
Disclosure of Invention
In view of the above, the present invention has been made to provide a method, an apparatus and a computer-readable storage medium for implementing augmented reality AR that overcome, or at least partially solve, the above problems.
According to an aspect of the present invention, there is provided a method for implementing an augmented reality AR, including:
acquiring a video stream acquired by a camera;
identifying a plane from the video stream;
and deploying at least part of AR elements in the AR model in the video stream by taking the plane as a reference plane of the AR model, generating the AR video stream and displaying the AR video stream on a display interface.
Optionally, the method further comprises:
identifying a target object from the video stream;
the deploying at least some AR elements in the AR model in the video stream comprises: judging whether the target object is matched with the deployment condition of any AR element in the AR model;
and finishing the deployment of the corresponding AR elements according to the matched deployment conditions.
Optionally, the method further comprises: recognizing the posture of the target object;
the deploying at least some AR elements of the AR model in the video stream further comprises: judging whether the posture of the target object is matched with the deployment condition of any one AR element in the AR model and/or judging whether the posture change of the target object is matched with the deployment condition of any one AR element;
and finishing the deployment of the corresponding AR elements according to the matched deployment conditions.
Optionally, the deployment condition of the AR element includes one or more of:
deploying undeployed AR elements;
deleting the deployed AR elements;
changing the presentation state of the deployed AR elements.
Optionally, the completing the deployment of the corresponding AR element according to the matched deployment condition includes:
determining deployment parameters in the matched deployment conditions according to one or more of speed, amplitude and type of the posture change of the target object, and/or determining deployment parameters in the matched deployment conditions according to the posture of the target object.
Optionally, the method further comprises:
first displaying the video stream on the display interface;
the taking the plane as a reference plane of the AR model comprises: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
According to another aspect of the present invention, there is provided an apparatus for implementing augmented reality AR, including:
the acquisition unit is suitable for acquiring the video stream acquired by the camera;
an identifying unit adapted to identify a plane from the video stream;
and the AR unit is suitable for deploying at least part of AR elements in the AR model in the video stream by taking the plane as a reference plane of the AR model, generating the AR video stream and displaying the AR video stream on a display interface.
Optionally, the identification unit is further adapted to identify a target object from the video stream;
and the AR unit is suitable for judging whether the target object is matched with the deployment condition of any one AR element in the AR model or not and finishing the deployment of the corresponding AR element according to the matched deployment condition.
Optionally, the recognition unit is further adapted to recognize the posture of the target object;
the AR unit is adapted to judge whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and to complete the deployment of the corresponding AR elements according to the matched deployment conditions.
Optionally, the deployment condition of the AR element includes one or more of:
deploying undeployed AR elements;
deleting the deployed AR elements;
changing the presentation state of the deployed AR elements.
Optionally, the AR unit is adapted to determine the deployment parameters in the matched deployment conditions according to one or more of a speed, an amplitude, and a type of the posture change of the target object, and/or adapted to determine the deployment parameters in the matched deployment conditions according to the posture of the target object.
Optionally, the AR unit is further adapted to display the video stream on the display interface, and when there are a plurality of identified planes, determine, in response to a selection instruction on the display interface, a plane closest to the selection instruction, and use the closest plane as a reference plane of the AR model.
According to a further aspect of the invention, there is provided a computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs which, when executed by a processor, implement the method as described in any one of the above.
According to the technical scheme, after the video stream collected by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some of the AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. In this technical scheme, a plane in the real world serves as the link to the virtual world, achieving a clever fusion of the virtual world and the real world, so that the generated AR video stream is more natural and more engaging.
The foregoing is only an overview of the technical solutions of the present invention; in order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features and advantages of the present invention may become more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 is a schematic flowchart of a method for implementing augmented reality AR according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an apparatus for implementing augmented reality AR according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flowchart of a method for implementing augmented reality AR according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
and step S110, acquiring the video stream collected by the camera.
Taking a mobile phone as an example of a device running the AR application, the camera of the mobile phone collects the video stream as real-world data.
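A minimal sketch of step S110 follows, assuming OpenCV is available and that camera index 0 refers to the device camera; the generator of frames stands in for the collected video stream and is not a specific API prescribed by the patent.

```python
import cv2

def acquire_video_stream(camera_index: int = 0):
    """Yield frames from the camera; the frames form the real-world video stream."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:  # camera disconnected or stream ended
                break
            yield frame
    finally:
        capture.release()
```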
Step S120: identifying a plane from the video stream.
For example, a horizontal plane such as a floor or a tabletop, a vertical plane such as a wall or a mirror, or an inclined plane such as a slide or a slope.
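Plane identification itself is not prescribed by the patent; one common approach, sketched below under the assumption that a 3D point cloud has already been recovered from the video stream (e.g. by a SLAM or depth pipeline), is RANSAC plane fitting. Whether the detected plane is horizontal, vertical or inclined can then be read off its normal.

```python
import numpy as np

def fit_plane_ransac(points: np.ndarray, iters: int = 200, tol: float = 0.01):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    best_inliers, best_plane = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.sum(np.abs(points @ normal + d) < tol)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```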
Step S130: taking the plane as a reference plane of the AR model, deploying at least some of the AR elements of the AR model in the video stream, generating an AR video stream and displaying it on a display interface.
An AR model is three-dimensional: its x, y and z axes pairwise define three reference planes (xy, yz and xz), and the identified real-world plane can be made to coincide with any one of these reference planes, thereby fusing the virtual world with the real world.
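Making the two planes coincide can be expressed as a rotation that takes the normal of the chosen model reference plane (e.g. the z axis for the xy plane) onto the normal of the identified plane. Below is a sketch using the Rodrigues construction; the function name and the choice of reference normal are illustrative, not from the patent.

```python
import numpy as np

def rotation_aligning(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Rotation matrix R with R @ a == b, for unit vectors a and b (Rodrigues)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(a.dot(b))
    if np.isclose(c, -1.0):  # opposite normals: rotate pi about any axis orthogonal to a
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + k + k @ k / (1.0 + c)
```

The model's position then follows by anchoring its origin at a point on the detected plane.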
As can be seen, in the method shown in Fig. 1, after the video stream collected by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some of the AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. In this technical scheme, a plane in the real world serves as the link to the virtual world, achieving a clever fusion of the virtual world and the real world, so that the generated AR video stream is more natural and more engaging.
In an embodiment of the present invention, the method further includes: identifying a target object from the video stream; and deploying at least part of the AR elements of the AR model in the video stream includes: judging whether the target object matches the deployment condition of any AR element in the AR model, and completing the deployment of the corresponding AR elements according to the matched deployment conditions.
Consider the following scenario: the AR model is a fountain set containing sixteen water jets. After an open door is identified from the video stream, the fountain is deployed outside the door, but only eight of the jets are deployed because of the constraint imposed by the size of the door.
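A minimal sketch of this kind of condition matching, with the fountain/door scenario encoded as illustrative data (the field names and the width threshold are assumptions, not values from the patent):

```python
from dataclasses import dataclass

@dataclass
class DeploymentCondition:
    target_label: str   # object that triggers deployment, e.g. "open_door"
    min_width_m: float  # smallest real-world width the element needs

@dataclass
class ARElement:
    name: str
    condition: DeploymentCondition

def matched_elements(detected_label: str, detected_width_m: float,
                     elements: list[ARElement]) -> list[ARElement]:
    """Return the AR elements whose deployment condition the target matches."""
    return [e for e in elements
            if e.condition.target_label == detected_label
            and detected_width_m >= e.condition.min_width_m]

# Sixteen water-jet elements, of which only eight fit through a 1.0 m door.
jets = [ARElement(f"jet_{i}", DeploymentCondition("open_door", 0.5 if i < 8 else 2.0))
        for i in range(16)]
deployed = matched_elements("open_door", 1.0, jets)  # eight jets deployed
```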
In an embodiment of the present invention, the method further includes: recognizing the posture of the target object; deploying at least a portion of the AR elements in the AR model in the video stream further comprises: judging whether the posture of the target object is matched with the deployment condition of any one AR element in the AR model and/or judging whether the posture change of the target object is matched with the deployment condition of any one AR element; and finishing the deployment of the corresponding AR elements according to the matched deployment conditions.
For example, if the target object is a person who strikes an "aircraft carrier style" pose, AR elements of the AR model's aircraft carrier, such as fighter jets and a deck, are deployed in the video stream; this is an example of determining deployment from a static posture. For another example, when the user holds a stick-shaped object in both hands and swings it left and right, two AR elements of the AR model, a flag pole and a flag face, are deployed in the video stream: the stick-shaped object is overlaid with the flag pole, and the flag face waves as the user's hands swing.
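One way to sketch the matching of static postures and posture changes is as a lookup over a short posture history; the posture labels and the classifier that would produce them are assumptions, since the patent does not fix a recognition method.

```python
from typing import Optional

PostureHistory = list[str]  # most recent posture labels, oldest first

def match_posture_conditions(history: PostureHistory,
                             static_rules: dict[str, str],
                             change_rules: dict[tuple[str, str], str]) -> Optional[str]:
    """Return the AR element to deploy, from a static posture or a posture change."""
    if not history:
        return None
    current = history[-1]
    if current in static_rules:                      # static posture, e.g. "carrier_style"
        return static_rules[current]
    if len(history) >= 2 and (history[-2], current) in change_rules:
        return change_rules[(history[-2], current)]  # posture change, e.g. a swing
    return None

static_rules = {"carrier_style": "aircraft_carrier"}
change_rules = {("swing_left", "swing_right"): "flag_pole_and_face"}
element = match_posture_conditions(["swing_left", "swing_right"],
                                   static_rules, change_rules)
```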
In an embodiment of the present invention, in the above method, the deployment condition of the AR element includes one or more of the following: deploying undeployed AR elements; deleting deployed AR elements; and changing the presentation state of deployed AR elements.
For example, a user holds an ordinary hat, which is replaced with a magic hat in the AR video stream. When the user first turns the hat over, it is empty. When the user puts the hat on, takes it off again (a posture change occurs), and turns the hat over once more, a rabbit emerges from inside; to achieve this, the rabbit AR element needs to be deployed at that point. The user then presses the rabbit back into the hat and blows on it (another posture change), and when the hat is turned over again the rabbit has disappeared; to achieve this effect, the rabbit AR element needs to be deleted. For another example, an enormous pillar is deployed on the ground, and when the user performs the action of pushing the pillar, the pillar slowly tilts; this is an example of changing the presentation state of a deployed AR element.
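These three outcomes can be sketched as a small action type applied to a table of deployed elements; the scene dictionary and state fields below are illustrative only.

```python
from enum import Enum, auto

class DeployAction(Enum):
    DEPLOY = auto()        # deploy an undeployed AR element (the rabbit appears)
    DELETE = auto()        # delete a deployed AR element (the rabbit vanishes)
    CHANGE_STATE = auto()  # change a deployed element's presentation (the pillar tilts)

def apply_action(scene: dict, element: str, action: DeployAction,
                 state: dict | None = None) -> None:
    """Mutate the table of deployed elements according to the matched condition."""
    if action is DeployAction.DEPLOY:
        scene[element] = dict(state or {})
    elif action is DeployAction.DELETE:
        scene.pop(element, None)
    elif action is DeployAction.CHANGE_STATE:
        scene.setdefault(element, {}).update(state or {})

scene: dict = {}
apply_action(scene, "rabbit", DeployAction.DEPLOY)                          # rabbit appears
apply_action(scene, "rabbit", DeployAction.DELETE)                          # rabbit vanishes
apply_action(scene, "pillar", DeployAction.DEPLOY, {"tilt_deg": 0})
apply_action(scene, "pillar", DeployAction.CHANGE_STATE, {"tilt_deg": 15})  # pillar tilts
```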
In an embodiment of the present invention, in the above method, completing the deployment of the corresponding AR element according to the matched deployment condition includes: determining deployment parameters in the matched deployment conditions according to one or more of the speed, amplitude and type of the posture change of the target object, and/or determining deployment parameters in the matched deployment conditions according to the posture of the target object.
For example, an AR element of playing cards is deployed on the tabletop in front of the user. When the user slaps the table, the playing cards fly up and hover in the air; to achieve this effect, the speed and height at which the cards fly can be determined from the speed and amplitude of the slap. Another effect achievable with the same playing-card AR element is the following: when the user picks up the cards and performs a fancy shuffle, a flourishing shuffle effect is shown. This differs from the effect described above but is realized on the basis of the same AR element; that is, the deployment parameters in the deployment conditions of the AR element change according to the different types of posture change of the target object (the person), producing different AR effects.
Determining the deployment parameters in the matched deployment conditions according to the posture of the target object is easy to understand: taking the earlier "aircraft carrier" example, the position, take-off angle and so on of the fighter jets need to be determined according to the user's posture.
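A sketch of mapping the type, speed and amplitude of a posture change to deployment parameters for the playing-card example follows; the scaling constants are illustrative assumptions, not values from the patent.

```python
def card_deploy_params(change_type: str, speed: float, amplitude: float) -> dict:
    """Map a posture change to parameters for the playing-card AR element."""
    if change_type == "table_slap":
        return {"effect": "fly_and_hover",
                "fly_speed": 0.8 * speed,        # faster slap, faster cards
                "hover_height": 0.5 * amplitude}  # bigger slap, higher hover
    if change_type == "fancy_shuffle":
        return {"effect": "shuffle_flourish",
                "tempo": speed}                   # flourish follows hand tempo
    return {"effect": "idle"}

params = card_deploy_params("table_slap", speed=2.0, amplitude=0.3)
```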
In an embodiment of the present invention, the method further includes: first displaying the video stream on a display interface; and taking the plane as the reference plane of the AR model includes: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
Sometimes multiple planes, such as a tabletop and the ground, are identified in the video stream. If the effect of planting a flag on a plane is to be achieved, the user needs to specify whether the flag is placed on the tabletop or on the ground. In this case the video stream is first displayed on the display interface; the user can tap the displayed tabletop or ground, and the background automatically determines the corresponding plane as the one on which the flag is planted according to the user's tap.
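A minimal sketch of the nearest-plane selection, assuming each candidate plane has already been projected to a screen-space anchor point (e.g. its centroid); the projection step itself is outside this sketch.

```python
import math

def nearest_plane(tap_xy: tuple[float, float],
                  plane_anchors: dict[str, tuple[float, float]]) -> str:
    """Return the id of the plane whose screen anchor is closest to the tap."""
    return min(plane_anchors,
               key=lambda pid: math.dist(tap_xy, plane_anchors[pid]))

anchors = {"tabletop": (320.0, 240.0), "ground": (320.0, 600.0)}
chosen = nearest_plane((310.0, 250.0), anchors)  # -> "tabletop"
```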
Fig. 2 is a schematic structural diagram of an apparatus for implementing augmented reality AR according to an embodiment of the present invention. As shown in Fig. 2, an apparatus 200 for implementing augmented reality AR includes:
the obtainingunit 210 is adapted to obtain a video stream collected by a camera.
Taking a mobile phone as an example of a device running the AR application, the camera of the mobile phone collects the video stream as real-world data.
The identifying unit 220 is adapted to identify a plane from the video stream.
For example, a horizontal plane such as a floor or a tabletop, a vertical plane such as a wall or a mirror, or an inclined plane such as a slide or a slope.
The AR unit 230 is adapted to deploy at least some of the AR elements of the AR model in the video stream, using the plane as a reference plane of the AR model, to generate an AR video stream, and to display the AR video stream on a display interface.
An AR model is three-dimensional: its x, y and z axes pairwise define three reference planes (xy, yz and xz), and the identified real-world plane can be made to coincide with any one of these reference planes, thereby fusing the virtual world with the real world.
As can be seen, in the apparatus shown in Fig. 2, through the cooperation of the units, after the video stream collected by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some of the AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. In this technical scheme, a plane in the real world serves as the link to the virtual world, achieving a clever fusion of the virtual world and the real world, so that the generated AR video stream is more natural and more engaging.
In an embodiment of the present invention, in the above apparatus, the identifying unit 220 is further adapted to identify a target object from the video stream; the AR unit 230 is adapted to determine whether the target object matches the deployment condition of any one of the AR elements in the AR model, and to complete the deployment of the corresponding AR element according to the matched deployment condition.
Consider the following scenario: the AR model is a fountain set containing sixteen water jets. After an open door is identified from the video stream, the fountain is deployed outside the door, but only eight of the jets are deployed because of the constraint imposed by the size of the door.
In an embodiment of the present invention, in the above apparatus, the recognition unit 220 is further adapted to recognize the posture of the target object; the AR unit 230 is adapted to determine whether the posture of the target object matches the deployment condition of any one of the AR elements in the AR model, and/or whether a posture change of the target object matches the deployment condition of any one of the AR elements, and to complete the deployment of the corresponding AR elements according to the matched deployment conditions.
For example, if the target object is a person who strikes an "aircraft carrier style" pose, AR elements of the AR model's aircraft carrier, such as fighter jets and a deck, are deployed in the video stream; this is an example of determining deployment from a static posture. For another example, when the user holds a stick-shaped object in both hands and swings it left and right, two AR elements of the AR model, a flag pole and a flag face, are deployed in the video stream: the stick-shaped object is overlaid with the flag pole, and the flag face waves as the user's hands swing.
In an embodiment of the present invention, in the above apparatus, the deployment condition of the AR element includes one or more of the following: deploying undeployed AR elements; deleting deployed AR elements; and changing the presentation state of deployed AR elements.
For example, a user holds an ordinary hat, which is replaced with a magic hat in the AR video stream. When the user first turns the hat over, it is empty. When the user puts the hat on, takes it off again (a posture change occurs), and turns the hat over once more, a rabbit emerges from inside; to achieve this, the rabbit AR element needs to be deployed at that point. The user then presses the rabbit back into the hat and blows on it (another posture change), and when the hat is turned over again the rabbit has disappeared; to achieve this effect, the rabbit AR element needs to be deleted. For another example, an enormous pillar is deployed on the ground, and when the user performs the action of pushing the pillar, the pillar slowly tilts; this is an example of changing the presentation state of a deployed AR element.
In one embodiment of the present invention, in the above apparatus, the AR unit 230 is adapted to determine the deployment parameters in the matched deployment conditions according to one or more of the speed, amplitude and type of the posture change of the target object, and/or adapted to determine the deployment parameters in the matched deployment conditions according to the posture of the target object.
For example, an AR element of playing cards is deployed on the tabletop in front of the user. When the user slaps the table, the playing cards fly up and hover in the air; to achieve this effect, the speed and height at which the cards fly can be determined from the speed and amplitude of the slap. Another effect achievable with the same playing-card AR element is the following: when the user picks up the cards and performs a fancy shuffle, a flourishing shuffle effect is shown. This differs from the effect described above but is realized on the basis of the same AR element; that is, the deployment parameters in the deployment conditions of the AR element change according to the different types of posture change of the target object (the person), producing different AR effects.
Determining the deployment parameters in the matched deployment conditions according to the posture of the target object is easy to understand: taking the earlier "aircraft carrier" example, the position, take-off angle and so on of the fighter jets need to be determined according to the user's posture.
In an embodiment of the present invention, in the above apparatus, the AR unit 230 is further adapted to first display the video stream on the display interface and, when multiple planes are identified, to determine, in response to a selection instruction on the display interface, the plane closest to the selection instruction and use that closest plane as the reference plane of the AR model.
Sometimes multiple planes, such as a tabletop and the ground, are identified in the video stream. If the effect of planting a flag on a plane is to be achieved, the user needs to specify whether the flag is placed on the tabletop or on the ground. In this case the video stream is first displayed on the display interface; the user can tap the displayed tabletop or ground, and the background automatically determines the corresponding plane as the one on which the flag is planted according to the user's tap.
In summary, according to the technical solution of the present invention, after the video stream collected by the camera is acquired, a plane is identified to serve as the reference plane of the AR model, and at least some of the AR elements of the AR model are deployed in the video stream, generating an AR video stream that can be displayed on a display interface. In this technical scheme, a plane in the real world serves as the link to the virtual world, achieving a clever fusion of the virtual world and the real world, so that the generated AR video stream is more natural and more engaging.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an implementation of an augmented reality AR according to an embodiment of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
Fig. 3 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. The computer-readable storage medium 300 stores computer-readable program code 310 for performing the steps of the method according to the invention, for example program code readable by a processor of an electronic device; when executed by the electronic device, the program code causes the electronic device to perform the steps of the method described above. In particular, the program code stored by the computer-readable storage medium may perform the method shown in any of the embodiments described above. The program code may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The embodiments of the invention disclose A1, a method for implementing AR, comprising the following steps:
acquiring a video stream acquired by a camera;
identifying a plane from the video stream;
and deploying at least part of AR elements in the AR model in the video stream by taking the plane as a reference plane of the AR model, generating the AR video stream and displaying the AR video stream on a display interface.
A2, the method of A1, wherein the method further comprises:
identifying a target object from the video stream;
the deploying at least some AR elements in the AR model in the video stream comprises: judging whether the target object is matched with the deployment condition of any AR element in the AR model;
and finishing the deployment of the corresponding AR elements according to the matched deployment conditions.
A3, the method of A2, wherein the method further comprises: recognizing the posture of the target object;
the deploying at least some AR elements of the AR model in the video stream further comprises: judging whether the posture of the target object is matched with the deployment condition of any one AR element in the AR model and/or judging whether the posture change of the target object is matched with the deployment condition of any one AR element;
and finishing the deployment of the corresponding AR elements according to the matched deployment conditions.
A4, the method of A2 or A3, wherein the deployment conditions of the AR element include one or more of:
deploying undeployed AR elements;
deleting the deployed AR elements;
changing the presentation state of the deployed AR elements.
A5, the method as in A3, wherein the completing the deployment of the corresponding AR elements according to the matched deployment conditions comprises:
determining deployment parameters in the matched deployment conditions according to one or more of speed, amplitude and type of the posture change of the target object, and/or determining deployment parameters in the matched deployment conditions according to the posture of the target object.
A6, the method of A1, wherein the method further comprises:
first displaying the video stream on the display interface;
the taking the plane as a reference plane of the AR model comprises: when a plurality of planes are identified, determining, in response to a selection instruction on the display interface, the plane closest to the selection instruction, and taking that closest plane as the reference plane of the AR model.
The embodiments of the invention also disclose B7, an apparatus for implementing AR, comprising:
the acquisition unit is suitable for acquiring the video stream acquired by the camera;
an identifying unit adapted to identify a plane from the video stream;
and the AR unit is suitable for deploying at least part of AR elements in the AR model in the video stream by taking the plane as a reference plane of the AR model, generating the AR video stream and displaying the AR video stream on a display interface.
B8, the device of B7, wherein,
the identification unit is further adapted to identify a target object from the video stream;
and the AR unit is suitable for judging whether the target object is matched with the deployment condition of any one AR element in the AR model or not and finishing the deployment of the corresponding AR element according to the matched deployment condition.
B9, the device of B8, wherein,
the recognition unit is further adapted to recognize the posture of the target object;
the AR unit is adapted to judge whether the posture of the target object matches the deployment condition of any AR element in the AR model, and/or whether a posture change of the target object matches the deployment condition of any AR element, and to complete the deployment of the corresponding AR elements according to the matched deployment conditions.
B10, the apparatus of B8 or B9, wherein the deployment conditions of the AR element include one or more of:
deploying undeployed AR elements;
deleting the deployed AR elements;
changing the presentation state of the deployed AR elements.
B11, the device of B9, wherein,
the AR unit is suitable for determining deployment parameters in the matched deployment conditions according to one or more of speed, amplitude and type of the posture change of the target object, and/or suitable for determining deployment parameters in the matched deployment conditions according to the posture of the target object.
B12, the device of B7, wherein,
the AR unit is further adapted to display the video stream on the display interface, when a plurality of identified planes exist, determine a plane closest to a selection instruction in response to the selection instruction on the display interface, and use the closest plane as a reference plane of the AR model.
Embodiments of the present invention also disclose C13, a computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs which, when executed by a processor, implement the method as described in any one of A1-A6.

Claims (11)

CN201711483513.2A | priority 2017-12-29 | filed 2017-12-29 | Method and device for realizing augmented reality AR and computer readable storage medium | Active | granted as CN108090968B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711483513.2A (granted as CN108090968B (en)) | 2017-12-29 | 2017-12-29 | Method and device for realizing augmented reality AR and computer readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201711483513.2A (granted as CN108090968B (en)) | 2017-12-29 | 2017-12-29 | Method and device for realizing augmented reality AR and computer readable storage medium

Publications (2)

Publication Number | Publication Date
CN108090968A (en) | 2018-05-29
CN108090968B (en) | 2022-01-25

Family

ID=62181267

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201711483513.2A (Active, granted as CN108090968B (en)) | Method and device for realizing augmented reality AR and computer readable storage medium | 2017-12-29 | 2017-12-29

Country Status (1)

Country | Link
CN (1) | CN108090968B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109002813B (en)* | 2018-08-17 | 2022-05-27 | 浙江大丰实业股份有限公司 | Stage fountain blockage state analysis system
CN110009952A (en)* | 2019-04-12 | 2019-07-12 | 上海乂学教育科技有限公司 | Adaptive learning mobile terminal and learning method based on augmented reality
CN110390730B (en)* | 2019-07-05 | 2023-12-29 | 北京悉见科技有限公司 | Method for arranging augmented reality object and electronic equipment
CN111475026B (en)* | 2020-04-10 | 2023-08-22 | 李斌 | Spatial positioning method based on mobile terminal application augmented virtual reality technology
CN114900722B (en)* | 2022-05-06 | 2024-12-20 | 浙江工商大学 | Personalized advertising implantation method and system based on AR technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102157011A (en)* | 2010-12-10 | 2011-08-17 | 北京大学 | Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN104740869A (en)* | 2015-03-26 | 2015-07-01 | 北京小小牛创意科技有限公司 | True environment integrated and virtuality and reality combined interaction method and system
CN105759960A (en)* | 2016-02-02 | 2016-07-13 | 上海尚镜信息科技有限公司 | Augmented reality remote guidance method and system in combination with 3D camera
CN106341720A (en)* | 2016-08-18 | 2017-01-18 | 北京奇虎科技有限公司 | Method for adding face effects in live video and device thereof
CN107025662A (en)* | 2016-01-29 | 2017-08-08 | 成都理想境界科技有限公司 | A kind of method for realizing augmented reality, server, terminal and system
CN107358656A (en)* | 2017-06-16 | 2017-11-17 | 珠海金山网络游戏科技有限公司 | The AR processing systems and its processing method of a kind of 3d gaming

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20170329402A1 (en)* | 2014-03-17 | 2017-11-16 | Spatial Intelligence Llc | Stereoscopic display

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hung-Chun Lin, et al.; Augmented reality using holographic display; 《Optical Data Processing and Storage》; 2017-01-12; Vol. 3; pp. 101-106 *
Ge Lin (葛林); Research on mobile augmented reality technology for urban scenes with a high degree of fusion; 《China Masters' Theses Full-text Database, Information Science and Technology》; 2016-07-15; No. 7; pp. 14, 19, 29 *
Teng Jian (滕健), et al.; Research on the design of augmented-reality-based product display apps; 《Packaging Engineering (包装工程)》; 2017-07-31; Vol. 38, No. 14; pp. 219-223 *

Also Published As

Publication number | Publication date
CN108090968A (en) | 2018-05-29

Similar Documents

Publication | Title
CN108090968B (en) | Method and device for realizing augmented reality AR and computer readable storage medium
US11430192B2 (en) | Placement and manipulation of objects in augmented reality environment
CN111641844B (en) | Live broadcast interaction method and device, live broadcast system and electronic equipment
JP6276882B1 (en) | Information processing method, apparatus, and program for causing computer to execute information processing method
CN105981076B (en) | Synthesize the construction of augmented reality environment
CN110716645A (en) | Augmented reality data presentation method and device, electronic equipment and storage medium
JP6392911B2 (en) | Information processing method, computer, and program for causing computer to execute information processing method
CN111640202B (en) | AR scene special effect generation method and device
CN111627117B (en) | Image display special effect adjusting method and device, electronic equipment and storage medium
CN111639613B (en) | Augmented reality AR special effect generation method and device and electronic equipment
CN108416832B (en) | Media information display method, device and storage medium
JP6224327B2 (en) | Information processing system, information processing apparatus, information processing method, and information processing program
WO2018142756A1 (en) | Information processing device and information processing method
CN111640200B (en) | AR scene special effect generation method and device
CN108986227B (en) | Particle special effect program file package generation method and device and particle special effect generation method and device
JP6730461B2 (en) | Information processing system and information processing apparatus
CN108563327B (en) | Augmented reality method, device, storage medium and electronic device
CN111667588A (en) | Person image processing method, person image processing device, AR device and storage medium
CN111249729A (en) | Game role display method and device, electronic equipment and storage medium
CN111638798A (en) | AR group photo method, AR group photo device, computer equipment and storage medium
CN113178017A (en) | AR data display method and device, electronic equipment and storage medium
CN109200586A (en) | Game implementation method and device based on augmented reality
JP6554139B2 (en) | Information processing method, apparatus, and program for causing computer to execute information processing method
JP5864789B1 (en) | Railway model viewing device, method, program, dedicated display monitor, scene image data for composition
CN106408666A (en) | Mixed reality demonstration method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
