CN111818265B - Interaction method and device based on augmented reality model, electronic equipment and medium - Google Patents

Interaction method and device based on augmented reality model, electronic equipment and medium

Info

Publication number
CN111818265B
CN111818265B (application CN202010687795.3A; published as CN111818265A, granted as CN111818265B)
Authority
CN
China
Prior art keywords
augmented reality
image
reality model
model
adjusting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010687795.3A
Other languages
Chinese (zh)
Other versions
CN111818265A (en)
Inventor
张璟聪
谭盈
李云珠
刘佳成
罗琳捷
刘晶
杨骁
陈志立
王国晖
杨建朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Priority to CN202010687795.3A
Publication of CN111818265A
Application granted
Publication of CN111818265B
Legal status: Active (Current)

Abstract

(Translated from Chinese)

Figure 202010687795

Embodiments of the present disclosure disclose an interaction method, apparatus, electronic device, and medium based on an augmented reality model. The method includes: acquiring a first image and determining an augmented reality model corresponding to the first image; adjusting orientation information of the augmented reality model through a vertex shader so that a set surface of the augmented reality model faces the photographer; and superimposing the adjusted augmented reality model on the first image to obtain a second image, and displaying the second image. In the embodiments of the present disclosure, before the augmented reality model is superimposed on the first image, the vertex shader adjusts the orientation information of the augmented reality model so that, in the superimposed second image, the set surface of the augmented reality model always faces the photographer. This presents the effect of the augmented reality model rotating to follow the photographer, enriches the display effect of the captured image, and solves the problem that display effects in current shooting scenes are relatively limited.


Description

Interaction method and device based on augmented reality model, electronic equipment and medium
Technical Field
The embodiment of the disclosure relates to computer technologies, and in particular, to an interaction method and apparatus based on an augmented reality model, an electronic device, and a medium.
Background
AR (Augmented Reality) technology combines a real environment with virtual information, so that an overlay of an AR model and an image of the real world can be displayed on the screen of an intelligent terminal.
At present, video shooting on an intelligent terminal merely records images of the shot object. Even intelligent terminals with an augmented reality function provide only simple application scenarios, such as background replacement and sticker addition; the display effect is limited and cannot satisfy users' pursuit of novel ways to play.
Disclosure of Invention
The embodiment of the disclosure provides an interaction method, an interaction device, electronic equipment and a medium based on an augmented reality model, which can enrich the display effect of a shot image.
In a first aspect, an embodiment of the present disclosure provides an interaction method based on an augmented reality model, including:
acquiring a first image, and determining an augmented reality model corresponding to the first image;
adjusting, by a vertex shader, orientation information of the augmented reality model to orient a set surface of the augmented reality model towards a photographer;
and overlapping the adjusted augmented reality model to the first image to obtain a second image, and displaying the second image.
In a second aspect, an embodiment of the present disclosure further provides an interaction apparatus based on an augmented reality model, where the apparatus includes:
the image recognition module is used for acquiring a first image and determining an augmented reality model corresponding to the first image;
the model adjusting module is used for adjusting the orientation information of the augmented reality model through a vertex shader so that the set surface of the augmented reality model faces a photographer;
and the model superposition module is used for superposing the adjusted augmented reality model on the first image to obtain a second image and displaying the second image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the augmented reality model-based interaction method provided in any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the interaction method based on the augmented reality model as provided in any embodiment of the present disclosure.
The embodiments of the present disclosure provide an interaction method, apparatus, electronic device, and storage medium based on an augmented reality model. Before the augmented reality model is superimposed on a first image, the orientation information of the augmented reality model is adjusted through a vertex shader, so that in the superimposed second image the set surface of the augmented reality model always faces the photographer. This presents the effect of the augmented reality model rotating to follow the photographer, enriches the display effect of the captured image, solves the problem of limited display effects in current shooting scenes, and provides a novel way to play that improves user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of an interaction method based on an augmented reality model according to an embodiment of the present disclosure;
fig. 2 is a flowchart of another interaction method based on an augmented reality model according to an embodiment of the present disclosure;
fig. 3 is a block diagram of an interaction apparatus based on an augmented reality model according to an embodiment of the present disclosure;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of an augmented reality model-based interaction method provided by an embodiment of the present disclosure, which may be performed by an augmented reality model-based interaction apparatus, which may be implemented by software and/or hardware and is generally disposed in an electronic device. As shown in fig. 1, the method includes:
step 110, acquiring a first image, and determining an augmented reality model corresponding to the first image.
In the embodiment of the present disclosure, the first image may be an image about the real world captured by a camera of the electronic device. For example, a plurality of frames of original images captured by a smartphone are taken as the first image.
Some objects in the real world are associated with a pre-established augmented reality model (which may be a three-dimensional model or a two-dimensional model, but is not limited thereto) in advance. For example, an image of an object in the real world may be captured, the captured image may be associated with an augmented reality model, and the image associated with the augmented reality model may be used as an image template. In the embodiment of the present disclosure, a 2D or 3D map and an augmented reality model are associated in advance, the associated augmented reality model is used as an augmented reality model corresponding to the map, and the 2D or 3D map is used as an image template. Alternatively, an augmented reality model is associated with an image including a landmark building in advance, the image having the landmark building is used as an image template, and the associated augmented reality model is used as an augmented reality model corresponding to the image including the landmark building. It should be noted that the augmented reality model may be constructed for different objects according to actual needs, and the embodiment of the present disclosure does not limit the type of the object.
Illustratively, a first image is acquired at a set period during the duration of a shooting event; judging whether the first image is matched with a preset image template or not; and if so, acquiring the augmented reality model corresponding to the first image. In the embodiment of the disclosure, each frame of the first image is acquired, the first image is matched with a preset image template, and if the similarity between the first image and the preset image template exceeds a set threshold, the first image is determined to be matched with the preset image template. It should be noted that the preset image template may be stored in a built-in repository of the electronic device, so that the preset image template can be obtained locally and quickly when matching is required. Alternatively, when matching is required, a preset image template may be requested from the server. The set threshold may be an empirical value, and is set according to different application scenarios.
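The template-matching step above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: it assumes equal-sized grayscale images and uses normalized cross-correlation as the similarity measure; the function names and the threshold value are assumptions for illustration.

```python
import numpy as np

def similarity(image: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-sized grayscale images."""
    a = image.astype(float) - image.mean()
    b = template.astype(float) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a.ravel(), b.ravel()) / denom)

def matches_template(image: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """The first image matches the preset template if similarity exceeds the set threshold."""
    return similarity(image, template) > threshold
```

In practice the set threshold would be an empirical value tuned per shooting scenario, as the text notes.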
In the embodiment of the present disclosure, the step of acquiring the augmented reality model corresponding to the first image includes, but is not limited to: and acquiring the augmented reality model corresponding to the first image by a resource library of the client. Or, the intelligent terminal requests the server for the augmented reality model corresponding to the first image. For example, a resource library is built in a client downloaded by the electronic device, the resource library includes some common augmented reality models, and when a new resource exists at the server, the server can issue an update notification to the client, thereby reminding the client of updating the built-in resource library. Optionally, if the user needs to download a new resource, the resources in the download list may be sorted according to the user's usage preference to preferentially display the resources that meet the user's usage preference.
In an exemplary embodiment, after the client determines that the first image matches with the preset image template, the client sends a model request to the server to obtain, by the server, an augmented reality model corresponding to the first image. Optionally, the downloaded augmented reality model may be cached locally for next use.
The setting period is a preset empirical value, and the setting periods in different shooting scenes can be the same or different. The shooting scene may be a sunrise scene, a cloudy scene, a sunny scene, a daytime scene, or a dim light scene, and the embodiment of the disclosure is not particularly limited.
Step 120, adjusting orientation information of the augmented reality model through a vertex shader, so that a set surface of the augmented reality model faces a photographer.
It should be noted that the vertex shader implements a general programmable stage for per-vertex processing. The input data of a vertex shader consist of:
Attributes: per-vertex data supplied through vertex arrays, typically used for variables that vary from vertex to vertex, such as vertex position and color.
Uniforms: constant data used by the vertex shader that cannot be modified by the shader, typically used for values that are the same for all vertices of a single 3D object composed of the same set of vertices, such as the position of the current light source.
Samplers: optional; a special type of uniform that represents the textures used by the vertex shader.
Shader program: the source code or executable of the vertex shader, describing the operations to be performed on the vertices.
The output data of the vertex shader are varying variables that are linearly interpolated during the rasterization stage, and the results are used as the input data of the fragment shader. Linear interpolation is the mechanism that generates a varying value for each fragment from the varying values assigned to the vertices.
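The interpolation mechanism described above can be illustrated with a small sketch: given a varying value at each of a triangle's three vertices, the value at any fragment inside the triangle is a barycentric-weighted average. The function name and array shapes here are illustrative, not from the patent.

```python
import numpy as np

def interpolate_varying(vertex_values, barycentric):
    """Linearly interpolate per-vertex varying values (e.g. color) for one fragment.

    vertex_values: (3, n) array -- one varying vector per triangle vertex.
    barycentric:   (3,) weights locating the fragment inside the triangle (sum to 1).
    """
    v = np.asarray(vertex_values, dtype=float)
    w = np.asarray(barycentric, dtype=float)
    return w @ v  # weighted sum of the three vertex values
```

For example, a fragment at the triangle's centroid (weights 1/3, 1/3, 1/3) receives the average of the three vertex colors.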
In the embodiment of the present disclosure, the orientation information of the augmented reality model may be orientation information of a setting surface of the augmented reality model. Wherein the set surface may be one or more surfaces of a pre-specified augmented reality model.
Illustratively, adjusting orientation information of the augmented reality model by the vertex shader may include the steps of: and acquiring a shooting angle of the first image, and adjusting orientation information of the augmented reality model through a vertex shader according to the shooting angle so as to enable the set surface of the augmented reality model to face a photographer.
There are many ways to obtain the shooting angle of the first image, and the embodiment of the disclosure is not particularly limited. For example, the shooting angle of the first image may be determined from the position change information of the camera that shoots the first image. Or, the shooting angle of the first image and the like are calculated according to data collected by the image sensor.
The shooting angle indicates the position of the photographer, relative to the shot object, from which the first image was captured. For example, the shooting angle may be 30° east of the subject, due south of the subject, due west of the subject, and so on.
Optionally, obtaining a shooting angle of the first image, and adjusting, by the vertex shader, orientation information of the augmented reality model according to the shooting angle may include: the method comprises the steps of obtaining a shooting angle of a first image, determining a rotation angle of a vertex of a set surface of the augmented reality model according to the shooting angle, adjusting the vertex of the set surface through a vertex shader according to the rotation angle, achieving adjustment of the orientation of the augmented reality model, and achieving the effect that the set surface of the augmented reality model always faces a photographer.
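As a concrete sketch of the step above, under the assumption that the shooting angle is expressed as a horizontal angle around the model's vertical axis (an assumption for illustration; the patent does not fix a coordinate convention): the vertices of the set surface are rotated about the y-axis by that angle so the surface faces the photographer, which is the per-vertex operation a vertex shader would perform.

```python
import numpy as np

def rotate_to_face_photographer(vertices, shooting_angle_deg):
    """Rotate the set surface's vertices about the vertical (y) axis so the
    surface faces a photographer standing at shooting_angle_deg around the model."""
    theta = np.radians(shooting_angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[  c, 0.0,   s],
                      [0.0, 1.0, 0.0],
                      [ -s, 0.0,   c]])
    # apply the rotation to every vertex (row vectors)
    return np.asarray(vertices, dtype=float) @ rot_y.T
```

A surface normal initially pointing along +z is turned to +x by a 90° shooting angle, i.e. the set surface tracks the photographer as they walk around the model.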
And step 130, overlapping the adjusted augmented reality model to the first image to obtain a second image, and displaying the second image.
Illustratively, a preset superposition rule is obtained, a superposition region of the augmented reality model corresponding to each frame of the first image is determined according to the superposition rule, the augmented reality model is superposed on the superposition region respectively to obtain a plurality of frames of second images, and the second images are displayed according to a set sequence. Wherein the set order may be an acquisition order of the first images. Alternatively, the order of generation of the second image is used. Alternatively, other custom sequences are possible, and the disclosed embodiments are not limited in this respect.
It should be noted that the superimposition rule indicates the superimposition manner and the superimposition position of the augmented reality model in each frame of the first image to be superimposed. The superimposition manner includes zooming, color changing, and/or moving, etc. Specifically, when the second image is played, the process of superimposing the augmented reality model onto the first image is presented, and during superimposition the augmented reality model shows a zooming effect. And/or the second image is played while the model shows texture and color transformation effects during superimposition. And/or the second image is played while the movement of the augmented reality model during superimposition is displayed, and so on. The superimposition position indicates the spatial position of the augmented reality model in each frame of the first image; it may be the coordinates of the model's vertices in the first image, or a position relative to a set object in the first image.
It should be noted that, there are many ways to superimpose the augmented reality model onto the superimposition region in the first image, and the embodiment of the present disclosure is not particularly limited. An exemplary implementation manner is to render the adjusted augmented reality model to the second layer, and the transparency of the region in the second layer except for the augmented reality model is zero, that is, the region in the second layer except for the augmented reality model is a transparent region. And the layer where the first image is located is a first layer, and the second layer and the first layer are synthesized to realize the superposition of the augmented reality model to the superposition area in the first image. Another exemplary implementation manner is to remove the pixel point of the superimposed region in the first layer, fill the adjusted pixel point of the augmented reality model into the superimposed region in the first layer, and superimpose the augmented reality model onto the superimposed region in the first image.
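The first overlay variant described above — compositing a second layer that is transparent everywhere except the model over the first-image layer — can be sketched with standard alpha blending. The array shapes and names are illustrative assumptions.

```python
import numpy as np

def composite_layers(first_layer, second_layer, alpha):
    """Alpha-composite the model layer (second) over the image layer (first).

    first_layer, second_layer: (H, W, 3) float arrays in [0, 1].
    alpha: (H, W) float array -- zero outside the augmented reality model,
           so only the model's own pixels replace the first image.
    """
    a = np.asarray(alpha, dtype=float)[..., None]  # broadcast over color channels
    return second_layer * a + first_layer * (1.0 - a)
```

With alpha fixed at 0 outside the model, the regions of the second layer other than the model are fully transparent, exactly as the text describes.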
According to the above technical solution, before the augmented reality model is superimposed on the first image, the orientation information of the augmented reality model is adjusted through the vertex shader, so that in the superimposed second image the set surface of the augmented reality model always faces the photographer. This achieves the effect of the augmented reality model rotating to follow the photographer, enriches the display effect of the captured image, solves the problem of limited display effects in current shooting scenes, and provides a novel way to play that improves user experience.
In one exemplary embodiment, a reference augmented reality model corresponding to a first image is obtained; and acquiring image data to be displayed, and adjusting texture information of a set surface of the reference augmented reality model according to the image data to obtain the augmented reality model corresponding to the first image. The image data includes video data, pictures, and the like. For example, the video data may be short video data, live video data, or long video data, and so on.
Illustratively, the step of acquiring a reference augmented reality model corresponding to the first image may comprise: and identifying the first image, determining a set object in the first image according to the identification result, and acquiring a reference augmented reality model corresponding to the set object. Wherein the set object is an object corresponding to at least one augmented reality model. The step of acquiring the reference augmented reality model corresponding to the set object includes, but is not limited to: and acquiring the reference augmented reality model corresponding to the set object by a resource library of the client. Alternatively, the electronic device requests the server for a reference augmented reality model corresponding to the set object.
The step of obtaining image data to be displayed and adjusting texture information of the set surface of the reference augmented reality model according to the image data may include: obtaining video data or picture data to be displayed from a resource server, and adjusting the texture information of the set surface of the reference augmented reality model according to the image data, so as to obtain the augmented reality model corresponding to the first image.
For example, if the first image is recognized as a world map, objects contained in it, such as continents, oceans, or mountains, are determined, and a reference augmented reality model is obtained for each object. Different image data to be displayed are then used to modify the texture information of the set surface of each reference augmented reality model, so that the augmented reality models are displayed popping up from the corresponding regions of the world map (continents, oceans, mountains, and so on), each presenting different image data.
According to the technical scheme of the embodiment of the disclosure, the effect of displaying the image data by taking the augmented reality model as a medium is realized by acquiring the reference augmented reality model corresponding to the first image and adjusting the texture information of the set surface of the reference augmented reality model according to the image data to be displayed, and the display effect of the shot image is enriched.
Fig. 2 is a flowchart of another interaction method based on an augmented reality model according to an embodiment of the present disclosure, where the method includes:
step 210, obtaining a first image, and determining an augmented reality model corresponding to the first image.
Step 220, adjusting orientation information of the augmented reality model through a vertex shader, so that a set surface of the augmented reality model faces a photographer.
And step 230, acquiring a preset model motion track.
It should be noted that the model motion trajectory indicates the position information and the depth information of the augmented reality model in each frame of the first image. In the embodiment of the present disclosure, the position information may be coordinate information of vertices of the augmented reality model, or other information representing the positions of its pixel points. The depth information may be the depth of the vertices of the augmented reality model, or other information indicating how far the model's pixel points are from the photographer. As the distance between the augmented reality model and the photographer decreases from far to near, the model shows an enlargement effect, growing from small to large. Correspondingly, as the distance increases from near to far, the model shows a shrinking effect, reducing from large to small.
The model motion trajectory is preset, and after the augmented reality model is built, the model motion trajectory is associated with the built augmented reality model. In order to be able to display an augmented reality model on a display screen of an electronic device, the augmented reality model is processed based on a specific transformation matrix, which may be transformed from a model coordinate system to screen coordinates. The method is adopted to process the augmented reality model corresponding to each position on the preset model motion track, and position information and depth information of the augmented reality model in each frame of first image are obtained.
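The transformation from model coordinates to screen coordinates described above can be sketched as follows — a minimal illustration with a generic model-view-projection (MVP) matrix and viewport transform; the specific matrices and conventions are assumptions, not the patent's.

```python
import numpy as np

def model_to_screen(vertex, mvp, viewport_w, viewport_h):
    """Project a model-space vertex to screen coordinates plus depth.

    vertex: (3,) model-space position; mvp: (4, 4) model-view-projection matrix.
    Returns (x_pixel, y_pixel, depth), where depth is the NDC z value.
    """
    v = np.append(np.asarray(vertex, dtype=float), 1.0)  # homogeneous coordinates
    clip = mvp @ v
    ndc = clip[:3] / clip[3]                             # perspective divide
    x = (ndc[0] + 1.0) * 0.5 * viewport_w                # viewport transform
    y = (1.0 - ndc[1]) * 0.5 * viewport_h                # screen y grows downward
    return x, y, ndc[2]
```

Applying this per vertex at each trajectory position yields exactly the per-frame position information (x, y) and depth information (z) that the trajectory associates with the model.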
Optionally, when the server obtains the augmented reality model, the client may download data corresponding to a model motion trajectory associated with the augmented reality model, and store the data in the built-in resource library. Or, storing identification information of the model motion trail associated with the augmented reality model in a built-in resource library of the client, so that when the augmented reality model needs to be used, the server acquires data corresponding to the model motion trail according to the identification information.
Illustratively, according to the acquired augmented reality model, a model motion track corresponding to the augmented reality model is determined, and data corresponding to the model motion track is acquired.
And step 240, adjusting the augmented reality model through a vertex shader according to the position information and the depth information.
Illustratively, the augmented reality model is resized by the vertex shader according to the depth information. For example, the depth information of the vertex of the augmented reality model corresponding to each position in the model motion trajectory is obtained, the depth information of the vertex of the augmented reality model corresponding to each frame of the first image is determined according to the corresponding relation between the position in the model motion trajectory and the first image, the attribute information of the vertex of the augmented reality model is adjusted through the vertex shader based on the depth information, and then the size of the augmented reality model is adjusted, so that the augmented reality model is zoomed. The attribute information includes, but is not limited to, location information and quantity information.
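The depth-driven resizing above can be sketched as a per-vertex scale about the model's centroid. This is an illustrative sketch only: the inverse-depth scale factor and the function name are assumptions, not the patent's formula.

```python
import numpy as np

def scale_model_by_depth(vertices, depth, reference_depth=1.0):
    """Scale the model's vertices about their centroid according to depth:
    closer to the photographer (smaller depth) -> larger model, and vice versa."""
    factor = reference_depth / max(depth, 1e-6)  # assumed inverse-depth scale factor
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    # move each vertex toward/away from the centroid, adjusting vertex positions
    return center + (v - center) * factor
```

Doubling the depth halves the model's extent, giving the large-to-small shrinking effect described for near-to-far motion.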
Illustratively, the spatial position of the augmented reality model is adjusted by the vertex shader according to the position information. For example, position information of a vertex of the augmented reality model corresponding to each position in the model motion trajectory is obtained, a spatial position of the vertex of the augmented reality model corresponding to each frame of the first image is determined according to a corresponding relation between the position in the model motion trajectory and the first image, and the position of the vertex of the augmented reality model is adjusted through the vertex shader based on the spatial position, so that the spatial position of the augmented reality model is adjusted.
And step 250, superimposing the adjusted augmented reality model on the first image according to the spatial position to obtain a second image.
Illustratively, an overlay region is determined according to the spatial position, and the adjusted augmented reality model is overlaid on the first image based on the overlay region to obtain a second image.
Step 260, rendering the second image to a display interface to display a motion process of the augmented reality model, wherein a preset surface of the augmented reality model faces a photographer in the motion process.
For example, a video of the course of motion of the augmented reality model may be presented by sequentially rendering multiple frames of the second image to the display interface. And, during the motion, the preset surface of the augmented reality model is always directed to the photographer. Alternatively, image data such as video or pictures may be displayed on the setting surface.
According to the above technical solution, the model motion trajectory of the augmented reality model is obtained, the augmented reality model is adjusted through the vertex shader according to the position information and the depth information in the trajectory, and the adjusted model is superimposed on the first image to obtain the second image. Because the depth and the spatial position of the model's vertices are taken into account during superimposition, the augmented reality model can move along the model motion trajectory, scaling as it moves, which enriches the image display effect.
Fig. 3 is a block diagram of an interaction apparatus based on an augmented reality model according to an embodiment of the present disclosure. The apparatus may be implemented by software and/or hardware, and is generally integrated in an electronic device, so as to enrich the display effect of a captured image by performing the augmented reality model-based interaction method according to the embodiment of the present disclosure. As shown in fig. 3, the apparatus includes:
an image obtaining module 310, configured to obtain a first image, and determine an augmented reality model corresponding to the first image;
a model adjusting module 320, configured to adjust, by the vertex shader, orientation information of the augmented reality model so that a set surface of the augmented reality model faces a photographer;
and a model superimposing module 330, configured to superimpose the adjusted augmented reality model on the first image to obtain a second image, and display the second image.
The interaction apparatus based on the augmented reality model provided by the embodiments of the present disclosure is configured to perform the interaction method based on the augmented reality model; its implementation principle and technical effect are similar to those of the method and are not repeated here.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present disclosure. Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 406 into a random access memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 406 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 406, or from the ROM 402. When executed by the processing device 401, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a first image, and determine an augmented reality model corresponding to the first image;
adjust, by a vertex shader, orientation information of the augmented reality model to orient a set surface of the augmented reality model towards a photographer;
and superimpose the adjusted augmented reality model on the first image to obtain a second image, and display the second image.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The names of the modules do not, in some cases, constitute a limitation of the modules themselves.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, an interaction method based on an augmented reality model is provided, where the acquiring a first image and determining the augmented reality model corresponding to the first image includes:
acquiring a first image according to a set period in the duration of a shooting event;
judging whether the first image is matched with a preset image template;
and if so, acquiring an augmented reality model corresponding to the first image.
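For illustration, this periodic matching step can be sketched as follows. This is a minimal sketch, not the patented implementation: the grayscale normalized cross-correlation score, the 0.9 threshold, and the `templates` mapping are all assumptions standing in for a real template- or feature-matching pipeline.

```python
import numpy as np

def matches_template(frame, template, threshold=0.9):
    """Return True when a captured frame matches a preset image template.

    Both inputs are grayscale images of the same shape. Similarity here is
    the normalized cross-correlation of the two images, an illustrative
    stand-in for a real feature-based matcher.
    """
    a = frame.astype(float) - frame.mean()
    b = template.astype(float) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return False
    return float((a * b).sum() / denom) >= threshold

def model_for_frame(frame, templates):
    """Return the name of the first model whose template matches the frame,
    or None when no template matches (no model is superimposed)."""
    for name, template in templates.items():
        if matches_template(frame, template):
            return name
    return None

rng = np.random.default_rng(0)
landmark = rng.random((32, 32))          # preset template of a landmark
templates = {"tower_model": landmark, "bridge_model": rng.random((32, 32))}
# A frame captured during the shooting event: a slightly noisy view of the landmark.
frame = landmark + rng.normal(0.0, 0.01, landmark.shape)
```

Running `model_for_frame(frame, templates)` selects `"tower_model"` here, while an unrelated frame matches nothing and no model is fetched.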
According to one or more embodiments of the present disclosure, the present disclosure provides an interaction method based on an augmented reality model, where the obtaining of the augmented reality model corresponding to the first image includes:
acquiring a reference augmented reality model corresponding to the first image;
and acquiring image data to be displayed, and adjusting the texture information of the set surface of the reference augmented reality model according to the image data to obtain the augmented reality model corresponding to the first image.
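A toy sketch of this texture-adjustment step is below. The dictionary-based model representation, the surface names, and the function `apply_display_texture` are hypothetical stand-ins for a real mesh/material structure; the point is only that the texture of the set surface is replaced by the image data to be displayed.

```python
import numpy as np

def apply_display_texture(reference_model, surface_name, image_data):
    """Return an augmented reality model whose set surface shows image_data.

    reference_model maps surface names to texture arrays (a hypothetical
    stand-in for a real mesh/material structure); only the texture of the
    named set surface is adjusted.
    """
    if surface_name not in reference_model:
        raise KeyError(f"model has no surface named {surface_name!r}")
    model = dict(reference_model)            # shallow copy; other surfaces shared
    model[surface_name] = np.asarray(image_data)
    return model

reference = {
    "front": np.zeros((64, 64, 3), dtype=np.uint8),   # set surface, initially blank
    "back": np.full((64, 64, 3), 128, dtype=np.uint8),
}
photo = np.full((64, 64, 3), 255, dtype=np.uint8)      # image data to be displayed
model = apply_display_texture(reference, "front", photo)
```

The reference model is left untouched, so it can be reused for the next shooting event with different image data.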
According to one or more embodiments of the present disclosure, there is provided an interaction method based on an augmented reality model, wherein the adjusting, by a vertex shader, orientation information of the augmented reality model to make a setting surface of the augmented reality model face a photographer includes:
and acquiring a shooting angle of the first image, and adjusting orientation information of the augmented reality model through a vertex shader according to the shooting angle so as to enable a set surface of the augmented reality model to face a photographer.
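This orientation adjustment amounts to a billboard rotation: given the viewing direction derived from the shooting angle, the model is rotated so that the set surface's outward normal points at the photographer. A sketch under that assumption follows (Rodrigues' rotation formula; the normal and axis conventions are illustrative, not taken from the disclosure).

```python
import numpy as np

def billboard_rotation(normal, to_camera):
    """Rotation matrix that turns the set surface's outward `normal` to point
    along `to_camera`, the direction from the model to the photographer
    (which would be derived from the shooting angle of the first image).

    Uses Rodrigues' rotation formula and assumes the two directions are not
    exactly opposite.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    c = np.asarray(to_camera, dtype=float)
    c = c / np.linalg.norm(c)
    v = np.cross(n, c)                       # rotation axis, |v| = sin(angle)
    cos = float(n @ c)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])      # cross-product matrix of v
    return np.eye(3) + vx + vx @ vx / (1.0 + cos)

# The set surface initially faces +z; the photographer lies along +x.
R = billboard_rotation([0.0, 0.0, 1.0], [1.0, 0.0, 0.0])
rotated_normal = R @ np.array([0.0, 0.0, 1.0])
```

Applying `R` to every vertex in the vertex shader keeps the set surface facing the photographer as the shooting angle changes.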
According to one or more embodiments of the present disclosure, there is provided an interaction method based on an augmented reality model, wherein after adjusting orientation information of the augmented reality model by a vertex shader, the interaction method further includes:
acquiring a preset model motion track, wherein the model motion track is used for indicating position information and depth information of the augmented reality model in each frame of the first image;
and adjusting the augmented reality model through a vertex shader according to the position information and the depth information.
According to one or more embodiments of the present disclosure, there is provided an interaction method based on an augmented reality model, wherein the adjusting the augmented reality model through a vertex shader according to the position information and the depth information includes:
adjusting the size of the augmented reality model through a vertex shader according to the depth information;
and adjusting the spatial position of the augmented reality model through a vertex shader according to the position information.
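These two vertex-shader operations can be sketched as a single per-vertex transform. The `reference_depth / depth` scaling rule is an illustrative assumption (the disclosure only states that size is adjusted according to the depth information): doubling the depth halves the model's size before it is moved to its spatial position.

```python
import numpy as np

def adjust_vertices(vertices, position, depth, reference_depth=1.0):
    """Per-vertex adjustment mirroring the two vertex-shader steps:
    scale the model according to its depth information, then translate it
    to the spatial position taken from the model motion trajectory.
    """
    scale = reference_depth / depth          # illustrative 1/depth rule
    return np.asarray(vertices, dtype=float) * scale + np.asarray(position, dtype=float)

# A unit quad adjusted for one frame of the trajectory.
quad = np.array([[-1.0, -1.0, 0.0], [1.0, -1.0, 0.0],
                 [1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
adjusted = adjust_vertices(quad, position=(3.0, 0.0, 2.0), depth=2.0)
```

Evaluating this once per frame of the trajectory produces the shrinking-while-moving effect described above.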
According to one or more embodiments of the present disclosure, an interaction method based on an augmented reality model is provided, where the overlaying of the adjusted augmented reality model onto the first image to obtain a second image, and displaying the second image includes:
according to the space position, overlapping the adjusted augmented reality model to the first image to obtain a second image;
rendering the second image to a display interface to display a motion process of the augmented reality model, wherein the set surface of the augmented reality model faces the photographer during the motion process.
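A minimal sketch of the superposition step: here the overlay region is taken to be a rectangle whose top-left corner is derived from the model's projected spatial position, and the rendered model is alpha-blended over the first image. The projection itself and the alpha mask are assumptions made for illustration.

```python
import numpy as np

def overlay_model(first_image, model_render, alpha, top_left):
    """Superimpose a rendered model onto the first image to get the second.

    model_render -- (h, w, 3) render of the adjusted augmented reality model
    alpha        -- (h, w) opacity mask of the render, values in [0, 1]
    top_left     -- (row, col) of the overlay region, derived from the
                    model's spatial position (projection assumed done)
    """
    second = first_image.astype(float).copy()
    h, w = model_render.shape[:2]
    r, c = top_left
    background = second[r:r + h, c:c + w]
    a = alpha[..., None]                      # broadcast over color channels
    second[r:r + h, c:c + w] = a * model_render + (1.0 - a) * background
    return second.astype(first_image.dtype)

frame = np.zeros((8, 8, 3), dtype=np.uint8)               # first image
render = np.full((2, 2, 3), 200, dtype=np.uint8)          # model render
alpha = np.ones((2, 2))                                   # fully opaque
second = overlay_model(frame, render, alpha, top_left=(3, 3))
```

Rendering a sequence of such second images to the display interface yields the video of the model's motion.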
According to one or more embodiments of the present disclosure, the present disclosure provides an interaction apparatus based on an augmented reality model, wherein the image acquisition module is specifically configured to:
acquiring a first image according to a set period in the duration of a shooting event;
judging whether the first image is matched with a preset image template;
and if so, acquiring an augmented reality model corresponding to the first image.
According to one or more embodiments of the present disclosure, there is provided an interaction apparatus based on an augmented reality model, wherein the acquiring an augmented reality model corresponding to the first image includes:
acquiring a reference augmented reality model corresponding to the first image;
and acquiring image data to be displayed, and adjusting the texture information of the set surface of the reference augmented reality model according to the image data to obtain the augmented reality model corresponding to the first image.
According to one or more embodiments of the present disclosure, there is provided an interaction apparatus based on an augmented reality model, wherein the model adjusting module is specifically configured to:
and acquiring a shooting angle of the first image, and adjusting orientation information of the augmented reality model through a vertex shader according to the shooting angle so as to enable a set surface of the augmented reality model to face a photographer.
According to one or more embodiments of the present disclosure, there is provided an interaction apparatus based on an augmented reality model, further including:
a trajectory acquisition module, configured to acquire a preset model motion trajectory after adjusting orientation information of the augmented reality model through a vertex shader, where the model motion trajectory is used to indicate position information and depth information of the augmented reality model in each frame of the first image;
and the vertex adjusting module is used for adjusting the augmented reality model through a vertex shader according to the position information and the depth information.
According to one or more embodiments of the present disclosure, there is provided an interaction apparatus based on an augmented reality model, wherein the adjusting the augmented reality model by a vertex shader according to the position information and the depth information includes:
adjusting the size of the augmented reality model through a vertex shader according to the depth information;
and adjusting the spatial position of the augmented reality model through a vertex shader according to the position information.
According to one or more embodiments of the present disclosure, there is provided an interaction apparatus based on an augmented reality model, wherein the model superposition module is specifically configured to:
according to the space position, overlapping the adjusted augmented reality model to the first image to obtain a second image;
rendering the second image to a display interface to display a motion process of the augmented reality model, wherein the set surface of the augmented reality model faces the photographer during the motion process.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An interaction method based on an augmented reality model is characterized by comprising the following steps:
acquiring a first image, and determining an augmented reality model corresponding to the first image;
adjusting, by a vertex shader, orientation information of the augmented reality model to orient a set surface of the augmented reality model towards a photographer, the set surface being one or more pre-specified surfaces of the augmented reality model;
and overlapping the adjusted augmented reality model to the first image to obtain a second image, and displaying the second image.
2. The method of claim 1, wherein the acquiring a first image, determining an augmented reality model corresponding to the first image, comprises:
acquiring a first image according to a set period in the duration of a shooting event;
judging whether the first image is matched with a preset image template;
and if so, acquiring an augmented reality model corresponding to the first image.
3. The method of claim 2, wherein the obtaining an augmented reality model corresponding to the first image comprises:
acquiring a reference augmented reality model corresponding to the first image;
and acquiring image data to be displayed, and adjusting the texture information of the set surface of the reference augmented reality model according to the image data to obtain the augmented reality model corresponding to the first image.
4. The method of claim 1, wherein the adjusting, by the vertex shader, the orientation information of the augmented reality model to orient the set surface of the augmented reality model towards the photographer comprises:
and acquiring a shooting angle of the first image, and adjusting orientation information of the augmented reality model through a vertex shader according to the shooting angle so as to enable a set surface of the augmented reality model to face a photographer.
5. The method of claim 1, further comprising, after adjusting orientation information of the augmented reality model by a vertex shader:
acquiring a preset model motion track, wherein the model motion track is used for indicating position information and depth information of the augmented reality model in each frame of the first image;
and adjusting the augmented reality model through a vertex shader according to the position information and the depth information.
6. The method of claim 5, wherein adjusting the augmented reality model according to the position information and the depth information by a vertex shader comprises:
adjusting the size of the augmented reality model through a vertex shader according to the depth information;
and adjusting the spatial position of the augmented reality model through a vertex shader according to the position information.
7. The method of claim 6, wherein the superimposing the adjusted augmented reality model onto the first image to obtain a second image, and displaying the second image comprises:
according to the space position, overlapping the adjusted augmented reality model to the first image to obtain a second image;
rendering the second image to a display interface to display a motion process of the augmented reality model, wherein the set surface of the augmented reality model faces the photographer during the motion process.
8. An interaction apparatus based on an augmented reality model, comprising:
the image acquisition module is used for acquiring a first image and determining an augmented reality model corresponding to the first image;
the model adjusting module is used for adjusting the orientation information of the augmented reality model through a vertex shader so as to enable a set surface of the augmented reality model to face a photographer, wherein the set surface is one or more pre-specified surfaces of the augmented reality model;
and the model superposition module is used for superposing the adjusted augmented reality model on the first image to obtain a second image and displaying the second image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the augmented reality model-based interaction method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out an augmented reality model-based interaction method according to any one of claims 1 to 7.
CN202010687795.3A | 2020-07-16 (priority) | 2020-07-16 (filed) | Interaction method and device based on augmented reality model, electronic equipment and medium | Active | CN111818265B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010687795.3A (CN111818265B) | 2020-07-16 | 2020-07-16 | Interaction method and device based on augmented reality model, electronic equipment and medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010687795.3A (CN111818265B) | 2020-07-16 | 2020-07-16 | Interaction method and device based on augmented reality model, electronic equipment and medium

Publications (2)

Publication Number | Publication Date
CN111818265A (en) | 2020-10-23
CN111818265B (en) | 2022-03-04

Family

ID=72865351

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010687795.3A (Active, CN111818265B) | Interaction method and device based on augmented reality model, electronic equipment and medium | 2020-07-16 | 2020-07-16

Country Status (1)

Country | Link
CN (1) | CN111818265B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN112312111A (en)*2020-10-302021-02-02北京字节跳动网络技术有限公司Virtual image display method and device, electronic equipment and storage medium
CN113329218A (en)*2021-05-282021-08-31青岛鳍源创新科技有限公司Augmented reality combining method, device and equipment for underwater shooting and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN105488840A (en)*2015-11-262016-04-13联想(北京)有限公司Information processing method and electronic equipment
CN109147054A (en)*2018-08-032019-01-04五八有限公司Setting method, device, storage medium and the terminal of the 3D model direction of AR
CN109215413A (en)*2018-09-212019-01-15福州职业技术学院A kind of mold design teaching method, system and mobile terminal based on mobile augmented reality
CN109545003A (en)*2018-12-242019-03-29北京卡路里信息技术有限公司A kind of display methods, device, terminal device and storage medium
CN110709898A (en)*2017-03-012020-01-17爱威愿景有限公司Video see-through display system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120075433A1 (en)* | 2010-09-07 | 2012-03-29 | Qualcomm Incorporated | Efficient information presentation for augmented reality
US10430961B2 (en)* | 2015-12-16 | 2019-10-01 | Objectvideo Labs, Llc | Using satellite imagery to enhance a 3D surface model of a real world cityscape
US10373365B2 (en)* | 2017-04-10 | 2019-08-06 | Intel Corporation | Topology shader technology
CN111415422B (en)* | 2020-04-17 | 2022-03-18 | Oppo广东移动通信有限公司 | Virtual object adjustment method and device, storage medium and augmented reality equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Application of Augmented Reality Based on the Android System and Mobile Phone Platform; Wang Guan; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-15; I138-1585 *

Also Published As

Publication number | Publication date
CN111818265A (en) | 2020-10-23

Similar Documents

Publication | Title
CN110062176B (en) | Method and device for generating video, electronic equipment and computer readable storage medium
CN114677386B (en) | Special effects image processing method, device, electronic device and storage medium
CN112929582A (en) | Special effect display method, device, equipment and medium
CN114730483A (en) | Generate 3D data in a messaging system
CN112802206B (en) | Roaming view generation method, device, equipment and storage medium
CN111833459B (en) | Image processing method, device, electronic device and storage medium
CN109801354B (en) | Panorama processing method and device
CN111818265B (en) | Interaction method and device based on augmented reality model, electronic equipment and medium
CN116527993A (en) | Video processing method, device, electronic device, storage medium and program product
CN114428573A (en) | Special effect image processing method and device, electronic equipment and storage medium
CN112070903A (en) | Virtual object display method and device, electronic equipment and computer storage medium
CN109816791B (en) | Method and apparatus for generating information
CN115733938B (en) | Video processing method, device, equipment and storage medium
CN114040129A (en) | Video generation method, device, equipment and storage medium
CN113873156A (en) | Image processing method, device and electronic device
CN117395450A (en) | Virtual live broadcast method, device, terminal and storage medium
CN113066166B (en) | Image processing method, device and electronic device
CN112492230B (en) | Video processing method and device, readable medium and electronic equipment
CN113703704A (en) | Interface display method, head-mounted display device and computer readable medium
CN114202617A (en) | Video image processing method and device, electronic equipment and storage medium
KR20220099584A (en) | Image processing method, apparatus, electronic device and computer readable storage medium
CN113704527B (en) | Three-dimensional display method, three-dimensional display device and storage medium
CN111200754B (en) | Panoramic video playing method and device, terminal and storage medium
CN111200759B (en) | Playing control method, device, terminal and storage medium of panoramic video
CN119676511A (en) | Method, device, equipment and medium for generating and processing animated images

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
