Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
For convenience of understanding, terms referred to in the embodiments of the present application are explained below:
Augmented Reality (AR): AR technology fuses virtual information with the real world. It applies a variety of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, so that computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then superimposed on the real world. The two kinds of information complement each other, thereby realizing "augmentation" of the real world.
3D: 3D (three-dimensional) refers to three-dimensional graphics; displaying a 3D figure on a computer means presenting a three-dimensional figure on a flat plane. Unlike the real world, which has real spatial depth, an image displayed on a computer only looks like the real world; a 3D figure is perceived by the human eye as it would be in reality because objects appear smaller with distance, which produces a stereoscopic effect. The computer screen itself is planar and two-dimensional, yet the eye can appreciate a three-dimensional image that looks as real as a physical object because differences in color and grayscale on the screen create a visual illusion, and the two-dimensional display is perceived as a three-dimensional image. Based on colorimetry, the raised edges of a three-dimensional object generally appear in a bright color, while recessed parts appear dark because light is blocked. This knowledge is widely used for drawing buttons and 3D lines in web pages and other applications. For example, 3D text can be drawn in a bright color at its original position and outlined in a darker color at a lower-left or upper-right offset, so that a 3D effect is produced visually. In a concrete implementation, two 2D characters of different colors can be drawn at slightly offset positions using the same font; as long as the coordinates of the two characters are suitable, 3D text with various effects can be generated visually.
In the related art, when the anchor performs a webcast using the anchor terminal, the audience terminal can interact with the anchor terminal only by sending bullet comments (barrage) or presenting virtual gifts. The interaction mode is limited and the degree of interaction is low, so the interest of audiences and fans in watching the live broadcast cannot be fully stimulated.
In the embodiment of the application, on receiving an AR object interaction instruction sent by the audience terminal, the anchor terminal triggers the AR object to execute the corresponding interactive action, so that interaction between the audience terminal and the AR object is realized and the participation of the audience in the live broadcast process is improved.
Fig. 1 is a schematic interface diagram illustrating an implementation process of a live broadcast interaction method according to an embodiment of the present application. The anchor terminal 110 logs in to an anchor account and performs a webcast through a live broadcast application program. During the live broadcast, the anchor collects live broadcast environment images through a camera; after the anchor terminal 110 selects the AR pet 111, the AR pet 111 is placed on an object displayed in the live broadcast picture, where the object may be a floor, a table, a cabinet, a bed, or the like in the live broadcast picture. The anchor terminal 110 may control the AR pet 111 to show interactive actions to the audience terminal 120 by inputting instructions at any time.
The audience terminal 120 logs in to a corresponding audience account and watches the webcast through a live broadcast room. When the audience terminal 120 needs to interact with the AR pet placed in the live broadcast room by the anchor terminal 110, it obtains an interaction opportunity by transferring virtual resources to the anchor terminal 110 (for example, presenting a virtual gift to the anchor). After the anchor terminal 110 receives the AR object interaction instruction sent by the audience terminal 120, it controls the AR pet 111 to execute the corresponding interaction action, so that interaction between the audience terminal 120 and the AR pet 111 is realized.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: an anchor terminal 210, a server 220, and a viewer terminal 230.
The anchor terminal 210 has a live broadcast application installed and running, and the application has an AR live function, that is, an AR object can be added to a captured live frame and controlled to execute a corresponding action. The application program may be any one of a game live application, a general-purpose live application, a chat live application, a food live application, and a shopping live application. The anchor terminal 210 is the terminal used by the anchor, who logs in to the corresponding live broadcast application and broadcasts in a live broadcast room. The anchor performs the webcast through the anchor terminal 210 and interacts with the audience terminal 230 by using the AR live function, that is, by placing an AR object in the live broadcast room and making it perform interactive actions, where the interactive actions include but are not limited to at least one of: changing the AR object's body posture, walking, running, jumping, acting cute, and mimicking a motion. Illustratively, the anchor controls the AR object (which may be an AR pet or an AR character) to walk or jump in the room by inputting a voice command, or controls the AR object to imitate the anchor's body movement to perform interaction, and the like.
The anchor terminal 210 is connected to the server 220 through a wireless network or a wired network.
The server 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 220 provides background services for the live applications in the anchor terminal 210 and the viewer terminal 230. For example, the server 220 may be a backend server for the applications described above. In this embodiment, the server 220 may receive a live video stream sent from the anchor terminal 210 and push the live video stream to the viewer terminal 230 that watches it; optionally, the server 220 is further configured to receive barrage information and the transferred virtual resources sent by the audience terminal 230, and push the merged live video stream to the anchor terminal 210 and the audience terminal 230. In addition, the server 220 may also receive a connection request between the audience terminal 230 and the anchor terminal 210, so as to implement connection interaction between the anchor terminal 210 and the audience terminal 230.
The viewer terminal 230 is connected to the server 220 through a wireless network or a wired network.
The viewer terminal 230 has a live broadcast application installed and running, and the application may be any one of a game live application, a general-purpose live application, a chat live application, a food live application, and a shopping live application. The audience terminal 230 is a terminal used by an audience watching the live broadcast; it has the corresponding live broadcast application installed and enters the anchor's live broadcast room to watch. The audience terminal 230 obtains an opportunity to interact with the AR object by transferring virtual resources to the anchor terminal 210 (e.g., presenting a virtual gift to the anchor), and controls the AR object to perform a corresponding interactive action by sending an AR object interaction instruction, where the interactive actions include but are not limited to at least one of: changing the AR object's body posture, walking, running, jumping, acting cute, and mimicking a motion. Illustratively, after the audience terminal 230 sends gifts to the anchor terminal 210, the AR object (which may be an AR pet or an AR character) is triggered to spin around in the room, or the AR object is controlled to imitate the viewer's body movement to perform an interactive action.
Alternatively, the live applications installed on the anchor terminal 210 and the viewer terminal 230 are the same, or the live applications installed on the two terminals are the same type of live application for different control system platforms. The anchor terminal 210 is the only terminal controlled by the anchor, while the viewer terminal 230 may refer broadly to one of a plurality of terminals; this embodiment is illustrated with only the anchor terminal 210 and the viewer terminal 230. The device types of the anchor terminal 210 and the viewer terminal 230 are the same or different, and include at least one of a smartphone, a tablet, a smart television, a portable computer, and a desktop computer. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, there may be only one terminal, or tens or hundreds of terminals, or more. The number of terminals and the type of device are not limited in the embodiments of the present application.
Fig. 3 is a flowchart of a live broadcast interaction method according to an exemplary embodiment of the present application, and this embodiment takes the method as an example for use in the anchor terminal shown in fig. 2. The method comprises the following steps:
step 301, responding to an AR object setting instruction, displaying an AR object in a live broadcast picture, where the live broadcast picture is a picture acquired by a live broadcast terminal through a camera.
When the anchor needs to display an AR object on the live broadcast picture, the AR object selection bar is called up on the user interface and the AR object to be displayed is selected from the AR object selection list. The AR object selection list comprises at least one AR object, and the AR object may be a virtual pet, a virtual character, a virtual pendant, or the like. This embodiment is described by taking the display of a virtual pet in the live view as an example.
The AR objects in the AR object selection list are displayed in the form of thumbnails, which may optionally display static icons or dynamic icons. In addition, the thumbnail may be a general two-dimensional picture display or a 3D picture display, and the embodiment does not limit the specific form of the thumbnail.
In a possible implementation, when the anchor needs to display AR objects in the live broadcast picture, the AR object selection list is called up by sliding a finger on the screen. The AR objects in the list are displayed as 3D thumbnails; when the 3D thumbnail of an AR object is tapped, the selected thumbnail is displayed dynamically, and the dynamic content may be an interactive action that the AR object can execute, which helps the anchor make a better choice.
The selected AR object is a virtual object displayed as a 3D image in the live broadcast picture. In order to fuse the selected AR object with objects in the real world, a live image needs to be acquired through the camera and the placement objects contained in the image need to be obtained, where a placement object is used for placing the selected AR object. For example, when the anchor terminal receives an AR object setting instruction, the selected AR object is displayed on a selected floor or table and the picture is rendered and fused, so that the presentation on the live broadcast picture is more realistic.
As shown in Fig. 4, in a possible implementation, when the anchor terminal determines an AR object, the anchor terminal 400 collects an environment image of the environment where it is located through the camera, identifies the placement objects contained in the environment image based on an object recognition algorithm, and displays the recognition result in text form at the position of each placement object. For example, when the AR object selected by the anchor terminal is an AR pet 410, the anchor is prompted by the displayed text to decide which placement object the AR pet 410 should be placed on; when the anchor terminal receives the AR object setting instruction, the AR pet 410 is displayed at the corresponding position in the live broadcast screen.
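As an illustrative sketch only (not part of the embodiments described above), the placement-object recognition in this step could be organized as follows; the detect_objects helper stands in for whatever object recognition model the anchor terminal actually uses, and its name, signature, and the label set are assumptions.

```python
# Illustrative sketch only: label candidate placement objects in the environment image.
# `detect_objects` stands in for an arbitrary object recognition model; its name,
# signature, and the label set below are assumptions.
from dataclasses import dataclass

PLACEMENT_LABELS = {"floor", "table", "cabinet", "bed"}   # objects the AR pet may be placed on


@dataclass
class Detection:
    label: str    # e.g. "table"
    box: tuple    # (x, y, w, h) in frame coordinates


def find_placement_objects(frame, detect_objects):
    """Return the detections that can serve as placement objects for the AR object."""
    return [det for det in detect_objects(frame) if det.label in PLACEMENT_LABELS]


def label_positions(detections):
    """Anchor the on-screen text prompt at the top centre of each placement object."""
    return [(d.label, (d.box[0] + d.box[2] // 2, d.box[1])) for d in detections]
```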
Step 302, receiving an AR object interaction instruction, where the AR object interaction instruction is triggered by a live broadcast terminal or a viewer terminal.
The audience watches the live broadcast using an audience terminal. When the audience needs to interact with the AR object displayed in the live broadcast picture, an AR object interaction instruction is sent to the anchor terminal to trigger the AR object to execute an interaction action, where the AR object interaction instruction contains the interaction action that the audience terminal requires the AR object to execute.
Similarly, the anchor terminal may also receive an AR object interaction instruction triggered by the anchor terminal itself, that is, the anchor triggers the AR object interaction instruction during the live broadcast and controls the AR object to execute the corresponding interaction action.
In one possible implementation, the audience terminal may trigger the AR object interaction instruction by presenting a virtual gift to the anchor terminal. As shown in Fig. 5, when the audience terminal 500 needs to interact with the AR object, the audience terminal 500 presents a virtual gift, such as a "rocket" or "pet food", to the anchor terminal, and the AR object interaction instruction is sent to the anchor terminal after the virtual gift is presented.
Step 303, controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction.
When the anchor terminal receives an AR object interaction instruction triggered by the audience terminal or the anchor terminal, the interaction action contained in the AR object interaction instruction is acquired, and the AR object is controlled to execute the corresponding interaction action.
To sum up, in the embodiment of the application, the AR object is displayed in the picture collected by the live broadcast terminal in response to the AR object setting instruction, and after an AR object interaction instruction sent by the live broadcast terminal or the audience terminal is received, the AR object is controlled to execute the corresponding interaction action based on the instruction. With the scheme provided by the embodiment of the application, not only can the anchor terminal control the AR object to interact with the audience, but the audience terminal can also control the AR object to interact, which enriches the interaction modes during the live broadcast and improves the interactive participation of the audience terminal in the live broadcast process.
Fig. 6 is a flowchart of a live broadcast interaction method according to another exemplary embodiment of the present application, and this embodiment takes the method as an example for being used in the anchor terminal shown in fig. 2. The method comprises the following steps.
Step 601, responding to the AR object setting instruction, displaying the AR object in a live broadcast picture, wherein the live broadcast picture is a picture acquired by a live broadcast terminal through a camera.
The implementation manner of this step may refer to step 301, which is not described herein again.
Step 602, receiving an AR object interaction instruction, where the AR object interaction instruction is triggered by a viewer terminal or a broadcaster terminal.
For the implementation of this step, reference may be made to step 302; details are not repeated here in this embodiment.
Step 603, identifying a 3D object in the live environment.
Because the AR object is a virtual object placed in the live broadcast picture according to the acquired environment image, in order to improve the realism of the rendered images when the AR object executes interactive actions during the live broadcast, a relatively realistic live broadcast picture can be rendered by identifying the 3D objects contained in the live broadcast picture and using the positional relationship between the 3D objects and the AR object. A 3D object is an object captured by the camera in the live broadcast environment during the live broadcast; for example, if the anchor broadcasts from home, the 3D objects contained in the live broadcast picture may be tables, beds, furniture, and other objects in the room. The AR object may touch, occlude, or be occluded by 3D objects in the live view when performing an interactive action. For example, an AR pet in the live broadcast picture moves from one position on the floor to another, and a table or other object lies on its moving path and occludes its rendered image; in order that the display of the AR pet and the picture of the live broadcast environment look more realistic, the AR pet needs to be rendered according to the characteristics of the table: as it passes behind the table it is progressively occluded until its body is completely hidden by the table, and after it has passed the table its body is displayed again. Therefore, before controlling the AR object to execute the interactive action, the 3D objects in the live environment need to be identified from the live frame acquired by the camera, and a more realistic frame is rendered according to the position information of the 3D objects and the AR object. In one possible implementation, the 3D objects contained in the live view may be identified by a 3D object recognition algorithm.
Step 604, controlling the AR object to execute the interactive action corresponding to the AR object interactive instruction in the live broadcast environment based on the depth information of each 3D object.
The live broadcast picture collected by the camera can only show the various 3D objects contained in the live broadcast environment. To make the AR pet perform rich interactive actions with respect to those 3D objects, the depth information of the various 3D objects must also be acquired when rendering the AR object together with the 3D objects, that is, the spatial position information and distance information of the 3D objects and the AR object in the live broadcast picture must be determined, so that the distance from the AR object to a 3D object and the time needed for the AR object to move to or pass the 3D object can be calculated. For example, the distance between the AR pet and the table in the live broadcast picture is determined, the time for the AR pet to execute the interactive action is calculated from the walking route and walking speed set for the AR pet, and the action sequence of the AR pet walking to the table and passing it is rendered in the picture.
As shown in Fig. 7, the AR pet 710 is a virtual image rendered in the live view, and the table 720 is a 3D object captured by the camera and displayed in the live view. When the moving path of the AR pet 710 performing an interactive action passes behind the table 720, since the table 720 is in front of the AR pet, only the portion of the AR pet 710 not covered by the table 720 is rendered (in the figure, the head of the AR pet is exposed while the other portions are covered by the table). As the AR pet 710 moves, the visible part of its body shown in the live view changes in real time, and once the AR pet 710 has completely passed behind the table 720, its entire body is rendered in the live view again.
In another possible implementation, when the movement path of the AR pet goes in front of the table or under the table, a display picture in which the AR pet occludes the table or moves under it is rendered correspondingly. When the moving path of the AR pet passes right through the position of the table, a display picture in which the AR pet passes through the table or walks around it may be rendered.
In a possible implementation manner, the depth information of the 3D objects in the live broadcast picture can be determined by binocular stereo vision: two environment images of the same live broadcast environment are acquired simultaneously by two cameras, the pixel points corresponding to the same 3D object in the two images are found according to a stereo matching algorithm, the disparity information is calculated according to the triangulation principle, and the disparity information can be converted to represent the depth information of the 3D object in the scene. Based on a stereo matching algorithm, a depth image of the scene can also be obtained by shooting a group of images of the same live broadcast environment from different angles. In addition, depth information can be estimated indirectly by analyzing the photometric characteristics, light and shade characteristics, and other characteristics of the collected image.
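The binocular depth estimation described above can be sketched, for example, with OpenCV's semi-global block matching; the focal length and camera baseline below are placeholder values, and this sketch is illustrative rather than a description of the implementation used in the embodiments.

```python
# Illustrative sketch of binocular depth estimation with OpenCV, assuming two
# rectified 8-bit grayscale frames from a left/right camera pair. The focal length
# and baseline are placeholder values.
import cv2
import numpy as np

def estimate_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    """Return a per-pixel depth map in metres for a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # search range, must be divisible by 16
        blockSize=7,
    )
    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mask invalid matches
    # Triangulation: depth = focal_length * baseline / disparity
    return focal_px * baseline_m / disparity
```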
After the depth information of various 3D objects is determined, the AR object is controlled to execute the interaction action corresponding to the AR object interaction instruction in the live broadcast environment based on the depth information of the various 3D objects.
In order to enable the AR pet displayed in the live broadcast picture to make richer and more realistic interactive actions, the AR object can be constructed and rendered through a point cloud, where the point cloud is a massive set of points describing the surface characteristics of a target object; that is, the AR object is a virtual image formed by this massive point set. Therefore, the AR object and its interactive actions displayed in the live view are rendered from a large number of point cloud points, and the form, action, and so on of the AR object are changed by controlling the position changes of the point cloud. After the anchor terminal receives the AR object interaction instruction, the change path of the point cloud to be controlled is determined based on the specific interaction action, and the AR object is then controlled accordingly. Therefore, as shown in Fig. 8, step 604 further includes the following steps.
Step 604A, determining a point cloud movement amount of the point cloud when the AR object executes the interaction action corresponding to the AR object interaction instruction, wherein the point cloud is used for controlling the AR object to move.
When the anchor terminal receives the AR object interaction instruction, it acquires the coordinate information of the current point cloud of the AR object and the coordinate information of the point cloud corresponding to the interaction action to be executed, and calculates the point cloud movement amount required to execute the interaction action.
Schematically, as shown in Fig. 9, after receiving an AR object interaction instruction, the anchor terminal obtains the point cloud configuration and point cloud coordinate information of the AR pet in the live broadcast picture at the current time, and calculates the point cloud configuration and coordinate information corresponding to the interaction action to be executed (to improve legibility, only the point cloud forming the outline of the AR pet is shown in the figure). As shown in the figure, the arms of the AR pet are spread at the current moment, and the interaction action indicates that the AR pet should fold its arms, with its expression and form changing accordingly. As can be seen from the figure, the interaction action corresponds to a change in the positions of the point cloud before and after the action; therefore, when the interaction action is executed, the point cloud movement amount of each point is calculated by comparing the point cloud before and after the action, and the AR pet is controlled to execute the corresponding interaction action by changing the coordinate positions of the point cloud.
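A minimal sketch of the point cloud movement amount calculation, assuming the current point cloud and the target point cloud share the same point ordering; the function names are illustrative.

```python
# Illustrative sketch of the point cloud movement amount, assuming the current and
# target point clouds share the same point ordering.
import numpy as np

def point_cloud_movement(current_pts, target_pts):
    """current_pts, target_pts: (N, 3) arrays of point coordinates."""
    displacement = target_pts - current_pts              # per-point movement vector
    distances = np.linalg.norm(displacement, axis=1)     # per-point movement amount
    return displacement, distances

def interpolate_pose(current_pts, displacement, t):
    """Point cloud of the AR object at progress t in [0, 1] of the interaction action."""
    return current_pts + t * displacement
```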
Step 604B, controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction in the live broadcast environment based on the depth information and the point cloud movement amount of each 3D object.
After the anchor terminal calculates the point cloud movement amount of the AR object, the AR object can be controlled to execute the interaction action corresponding to the AR object interaction instruction in the live broadcast environment according to the depth information of the 3D objects and the point cloud movement amount. When the interaction action indicates that the AR object should execute a continuous action, the position information of the AR object and the coordinate information of the point cloud are calculated in real time, the distance information and coordinate position change information between the 3D objects and the AR object are computed, and the coordinate positions of the point cloud are changed continuously, so that the AR object is controlled to make rich interaction actions in the live broadcast picture and interaction with the anchor or the audience is realized.
As shown in Fig. 10, after the anchor terminal determines the 3D object 1010 and its spatial position coordinates, the corresponding point cloud configuration is determined based on the shape of the 3D object 1010, and the distance from the AR pet 1020 to the 3D object 1010 is calculated. For example, if the distance between the AR pet 1020 and the 3D object 1010 is calculated to be 2 meters, the AR pet is controlled to move along the moving path indicated by the interaction action, and the form and action of the AR pet 1020 are changed in real time according to the point cloud movement amount during the movement. When the AR pet 1020 moves behind the 3D object 1010, a picture of the AR pet 1020 being occluded by the 3D object 1010 is rendered until the AR pet 1020 has completely passed the 3D object 1010 and performs an arm-folding action at the specified position, thereby showing the interaction action to the audience or the anchor.
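The occlusion behaviour illustrated in Fig. 7 and Fig. 10 can be approximated by a per-pixel depth test between the rendered AR object and the recovered scene depth, as in the following illustrative sketch; the array layout is an assumption and the sketch is not the renderer used by the live application.

```python
# Illustrative sketch of the occlusion test: an AR pixel is drawn only where the
# AR object is closer to the camera than the real scene (e.g. the table) at that pixel.
import numpy as np

def composite_with_occlusion(frame, ar_color, ar_depth, ar_mask, scene_depth):
    """
    frame:       (H, W, 3) live camera image
    ar_color:    (H, W, 3) rendered colours of the AR object
    ar_depth:    (H, W)    per-pixel depth of the rendered AR object
    ar_mask:     (H, W)    bool, True where the AR object covers the pixel
    scene_depth: (H, W)    per-pixel depth of the real scene (e.g. from stereo matching)
    """
    visible = ar_mask & (ar_depth < scene_depth)   # AR object in front of the scene
    out = frame.copy()
    out[visible] = ar_color[visible]               # occluded AR pixels keep the camera image
    return out
```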
In the embodiment of the application, when the anchor terminal performs live broadcasting, the 3D objects in the live broadcasting picture are identified, and the depth information of the 3D objects and the position coordinates of the point cloud forming the AR object are determined. After an AR object interaction instruction is received, the point cloud movement amount during execution of the action is determined based on the interaction action, and the transition picture between the AR object and the 3D objects during execution of the interaction action is then rendered, so that the AR object can show richer and more realistic interaction actions.
In a possible application scenario, after receiving an AR object interaction instruction sent by a viewer terminal, the anchor terminal needs to acquire the interaction data contained in the instruction, determine the target interaction action according to the interaction data, and then control the AR object to execute the corresponding target interaction action.
Fig. 11 is a flowchart of a live interaction method according to another exemplary embodiment of the present application, and this embodiment takes the method as an example for being used in the anchor terminal shown in fig. 2. The method comprises the following steps.
Step 1101, responding to an AR object setting instruction, displaying an AR object in a live broadcast picture, where the live broadcast picture is a picture acquired by a live broadcast terminal through a camera.
The implementation manner of this step may refer to step 301, which is not described herein again.
Step 1102, receiving an AR object interaction instruction, where the AR object interaction instruction is triggered by a viewer terminal.
For the implementation of this step, reference may be made to step 302; details are not repeated here in this embodiment.
Step 1103, determining a target interaction action based on interaction data contained in the AR object interaction instruction, where the interaction data is obtained when the audience terminal receives a virtual resource transfer instruction, and the virtual resource transfer instruction is used to trigger the audience account to transfer virtual resources to the live account.
In a possible implementation manner, after the anchor terminal receives the virtual resources transferred by the audience terminal, the level reached by the virtual resources is determined according to the transfer amount, and the AR object is then triggered to execute a corresponding interactive action according to the level. The level of the virtual resources is positively correlated with the interactive action executed by the AR object, that is, the higher the level of the transferred virtual resources, the richer the interactive action that can be executed by the AR object. For example, the audience terminal presents a rocket to the anchor terminal; after the anchor terminal receives the presented rocket, the virtual amount is calculated according to the presented quantity, and the AR object is triggered to execute a corresponding interactive action, such as a tail-wagging action, based on the virtual amount. Therefore, as shown in Fig. 12, step 1103 may further include the following steps.
Step 1103A, obtaining virtual resource transfer amount data included in the AR object interaction instruction.
When the audience needs to interact with the AR object, virtual resources are transferred to the anchor terminal to obtain a chance of interacting with the AR object. As shown in Fig. 5, the virtual resource is a virtual gift given to the anchor terminal by the audience terminal; the virtual gift may be a flower or pet food fed to the virtual pet, and each virtual gift has a corresponding virtual amount. When the audience terminal receives a virtual resource transfer instruction, the audience account is triggered to transfer the virtual resources to the live broadcast account.
Further, after receiving the AR object interaction instruction, the anchor terminal obtains the virtual resource transfer amount data contained in it and displays an animation corresponding to the virtual resource in the live broadcast screen; for example, if the virtual gift given by the audience terminal is a rocket, a virtual animation of the rocket launching is displayed in the live broadcast screen.
Step 1103B, determining a target interaction action based on the virtual resource transfer amount data, wherein different virtual resource transfer amounts correspond to different interaction actions.
After the anchor terminal receives the virtual resource transfer amount data, the level of the virtual resource transfer amount is determined, and the target interaction action is determined according to that level; that is, different virtual resource transfer amounts correspond to different interactive actions.
In a possible implementation manner, the level of the virtual resource transfer amount received by the anchor terminal is positively correlated with the interactive action executed by the AR object, that is, the higher the level of the virtual resource transfer amount is, the more abundant the target interactive action triggered to be executed by the AR object is, or the more interactive actions triggered to be executed by the AR object are.
Illustratively, the correspondence between the level of the virtual resource transfer amount and the target interaction action is shown in Table 1.
Table 1
As shown in Table 1, the anchor terminal determines the corresponding target interaction action according to the level of the virtual resource transfer amount received from the audience terminal; for example, if the virtual resource transfer amount received from the audience terminal is 30 virtual coins, the target interaction action is determined to be the puppy jumping onto the bed.
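A minimal sketch of mapping the virtual resource transfer amount to a target interaction action; the tier thresholds and action names are illustrative assumptions, and only the pairing of 30 virtual coins with jumping onto the bed is taken from the example above.

```python
# Illustrative sketch of mapping the virtual resource transfer amount to a target
# interaction action. The tier thresholds and action names are assumptions; only
# "30 virtual coins -> jump onto the bed" is taken from the example above.
TIER_ACTIONS = [
    (30, "jump_onto_bed"),
    (10, "spin_in_place"),
    (1,  "wag_tail"),
]

def target_action_for_amount(amount):
    """Higher transfer amounts unlock richer interactive actions."""
    for threshold, action in TIER_ACTIONS:
        if amount >= threshold:
            return action
    return None   # amount too small to trigger an interaction
```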
Optionally, the target interaction action may also be chosen by the viewer. After the viewer terminal transfers the virtual resources, an interaction option list is displayed by tapping the interaction control on the user interface, and the AR object is triggered to perform the corresponding target interaction action according to the selected item in the list. As shown in Fig. 13, an interaction control 1310 is disposed at an edge of the user interface of the viewer terminal 1300; after the viewer taps the interaction control 1310, an interaction option list 1320 is displayed at the edge of the user interface, and the interaction action options that the viewer can trigger are displayed in the interaction option list 1320. The number of interaction action options may be determined by the virtual resource transfer amount, that is, the more virtual resources are transferred from the viewer account to the anchor account, the more interaction action options are displayed in the interaction option list 1320.
In another possible implementation, after the virtual resource is transferred from the audience account to the live account, a prompt message is displayed on a user interface of the audience terminal, where the prompt message is used to prompt a user at the audience terminal to interact with the AR object by clicking a screen, and at this time,step 1103 may further include the following step.
Step 1103C, acquiring interaction gesture data included in the AR object interaction instruction, where the interaction gesture data is used for representing interaction gesture operations on the AR object.
After the audience terminal transfers the virtual resources to the anchor terminal, it obtains the opportunity to interact with the AR object. Optionally, an interaction prompt can pop up on the live broadcast picture displayed by the audience terminal, prompting the audience to tap the AR object in the live broadcast picture and interact with it. The viewer who has transferred the virtual resources controls the AR object to perform an interactive action by tapping or sliding on the screen; for example, the user on the audience terminal side taps the AR object to pet it, or drags the AR object to make it run in the live broadcast picture along the drag trajectory.
In a possible implementation manner, when the AR object is a virtual puppy, an interactive gesture operation of tapping or sliding on the virtual puppy in the live view is used to indicate an interactive action of petting the puppy or walking the dog.
Further, after the anchor terminal receives the AR object interaction instruction, it acquires the interaction gesture data contained in the instruction, where the interaction gesture data is used for representing the interaction gesture operation by which the viewer controls the AR object.
Step 1103D, determining a target interaction action based on the interaction gesture operations represented by the interaction gesture data, wherein different interaction gesture operations correspond to different interaction actions.
The anchor terminal determines, based on the gesture interaction data, the interactive gesture operation by which the viewer controls the AR object, and determines the target interaction action that the AR object should execute. The specific meaning of the gesture operation is determined by the viewer's sliding operation and the content displayed in the live broadcast picture; for example, if the viewer taps the AR object in the live broadcast picture, this indicates petting the AR object, and correspondingly the target interaction action is an action such as sitting down or lying down.
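A minimal sketch of interpreting the interaction gesture data carried by the AR object interaction instruction; the gesture field names and the action names are assumptions rather than part of the disclosed protocol.

```python
# Illustrative sketch of interpreting interaction gesture data. The gesture field
# names and the resulting action names are assumptions.
def action_from_gesture(gesture):
    """
    gesture: e.g. {"type": "tap", "position": (x, y)}
             or   {"type": "drag", "path": [(x0, y0), (x1, y1), ...]}
    """
    if gesture["type"] == "tap":
        # Tapping the AR object pets it, so it sits down or lies down.
        return {"action": "sit_down"}
    if gesture["type"] == "drag":
        # Dragging walks the AR object along the drawn trajectory.
        return {"action": "follow_path", "path": gesture["path"]}
    return {"action": "idle"}
```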
In another possible implementation manner, in order to better realize the interaction between the audience terminal and the AR object, the audience terminal may collect facial expressions and/or body movements of a user at the audience terminal side through a camera to control the AR object to simulate, so as to increase the live participation sense of the audience terminal. Accordingly,step 1103 may also include the following steps.
Step 1103E, acquiring interaction behavior data included in the AR object interaction instruction based on the received AR object interaction instruction, where the interaction behavior data is used to represent user behaviors at the audience terminal side, and the user behaviors are acquired by the audience terminal through a camera.
After the audience terminal transfers the virtual resources to the anchor terminal, the audience terminal automatically starts its camera to collect the user behavior on the audience terminal side; the viewer makes various movements to control the AR object to imitate them, and the interactive behavior may be a body movement, a facial expression, and the like. The audience terminal performs portrait recognition on the picture acquired by the camera, recognizes the facial expression and/or limb movement of the viewer contained in the picture, and sends the corresponding interaction behavior data to the anchor terminal.
Further, the anchor terminal obtains interaction behavior data contained in the AR object interaction instruction based on the received AR object interaction instruction.
Step 1103F, determining, based on the interactive behavior data, an action of the AR object simulating the user behavior as a target interactive action.
The anchor terminal determines, based on the interaction behavior data, the action of the AR object imitating the user behavior as the target interaction action; for example, if the viewer's interactive behavior is a head-shaking and blinking movement, the target interaction action imitated by the AR object is determined to be the head-shaking and blinking movement.
In a possible implementation manner, when the target interaction action is to imitate the facial expression of the user on the audience terminal side, a face image in the picture collected by the camera is acquired and recognized using a face recognition algorithm, and key data such as the width, position, and coordinates of the face are determined based on image gray values. When the user's facial expression changes, the changed data is sent to the anchor terminal, and the anchor terminal determines the amplitude of the change in the user's facial expression according to the received interaction behavior data, thereby controlling the AR object to imitate the corresponding target interaction action.
When the received interaction data is the body movement of the user on the audience terminal side, the key nodes of the human body can be identified based on a human posture recognition algorithm, and the user's body movement is determined according to information such as the motion direction and acceleration of the key nodes. The body movement is then sent to the anchor terminal as interaction data, the anchor terminal determines the specific body movement made by the user based on the received interaction data, and then controls the AR object to imitate the corresponding target interaction action.
Taking the AR object imitating the user's limb movement as an example, as shown in Fig. 14, after the viewer terminal 1410 turns on its camera, the user performs a head-raising and hand-raising movement within the camera acquisition range; the viewer terminal 1410 transmits the acquired interaction data to the anchor terminal, which then controls the AR object 1411 to imitate the corresponding target interaction movement.
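A minimal sketch of turning the viewer's body keypoints (as produced by a generic pose recognition model on the audience terminal) into limb orientations for the AR object to imitate; the keypoint names and the retargeting rule are assumptions.

```python
# Illustrative sketch: convert the viewer's 2D body keypoints into limb orientations
# that the AR object's skeleton adopts. Keypoint names and the retargeting rule
# (limb orientation in the image plane) are assumptions.
import numpy as np

LIMBS = {
    "left_arm":  ("left_shoulder", "left_wrist"),
    "right_arm": ("right_shoulder", "right_wrist"),
}

def mimic_pose(viewer_keypoints):
    """viewer_keypoints: dict mapping keypoint name -> (x, y) image coordinates."""
    angles = {}
    for limb, (parent, child) in LIMBS.items():
        v = np.array(viewer_keypoints[child], dtype=float) - np.array(viewer_keypoints[parent], dtype=float)
        angles[limb] = float(np.arctan2(v[1], v[0]))   # orientation of the limb in radians
    return angles

# Example: raised wrists give limb orientations that the AR object reproduces.
# mimic_pose({"left_shoulder": (100, 200), "left_wrist": (100, 120),
#             "right_shoulder": (180, 200), "right_wrist": (180, 120)})
```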
Step 1104, controlling the AR object to execute the target interaction action corresponding to the AR object interaction instruction.
After the anchor terminal determines the target interaction action that the AR object needs to execute, the AR object is controlled to execute the corresponding target interaction action according to the depth information of the 3D object indicated by the target interaction action and the point cloud movement amount of the AR object; for specific details, reference may be made to step 604, which is not repeated here in this embodiment.
In the embodiment of the application, when the audience terminal needs to perform live broadcast interaction with the AR object, the opportunity of interaction with the AR object is obtained by transferring the virtual resource to the anchor terminal, the AR object interaction instruction is sent to the anchor terminal, and then the anchor terminal determines the target interaction action based on the interaction data contained in the received AR object interaction instruction.
The anchor terminal may determine the target interaction action based on the level of the virtual resource transfer amount data and then control the AR object to execute it; or the AR object may be controlled to execute the target interaction action based on the interaction gesture data from the audience terminal side, so that the audience terminal can independently select the target interaction action of the AR object by sliding on the screen; in addition, the facial expression and/or limb movement of the user on the audience terminal side can be used as the target interaction action, achieving an interaction effect in which the AR object imitates the movements of the user on the audience terminal side. With the scheme provided by the embodiment of the application, interaction between the audience terminal and the AR object is realized, and the interactive participation of the audience terminal in the live broadcast process is improved.
In a possible application scenario, when the anchor needs to actively control the interaction between the AR object and the audience, the AR object can be controlled to perform a target interaction action through voice input or key pressing.
When the AR object interaction instruction is triggered by voice input, the anchor terminal recognizes the voice command input by the anchor through a speech recognition algorithm and determines the target interaction action according to the recognized semantics; for example, if the keyword "puppy dancing" is recognized in the collected voice data, it is determined that the AR object needs to perform a dancing action, and the AR puppy is controlled to dance at the corresponding position. When the AR object interaction instruction is triggered by an interaction option operation, the anchor displays the interaction option list by tapping a control on the user interface, determines the target interaction action to be triggered from the interaction option list, and controls the AR object to execute the target interaction action.
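A minimal sketch of the voice-triggered path, in which the recognized transcript is scanned for known keywords and mapped to a target interaction action; the keyword table is illustrative, with only "puppy dancing" taken from the example above.

```python
# Illustrative sketch of the voice-triggered path. The keyword table is an
# assumption; only "puppy dancing" is taken from the example above.
KEYWORD_ACTIONS = {
    "puppy dancing": "dance",
    "jump": "jump",
    "come here": "walk_to_anchor",
}

def action_from_transcript(transcript):
    """transcript: text produced by whatever speech recognizer the anchor terminal uses."""
    text = transcript.lower()
    for keyword, action in KEYWORD_ACTIONS.items():
        if keyword in text:
            return action
    return None   # no recognized keyword, no interaction triggered
```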
In a possible implementation manner, in order to better show the interactive action of the AR object in the live display picture, after an AR object interaction instruction from the audience terminal or the anchor terminal is received, the interactive object is determined according to the instruction, and the corresponding interactive action is then executed at the interactive object, so that the interaction is presented more realistically in the live display picture and richer target interaction actions can be executed.
Fig. 15 is a flowchart of a live broadcast interaction method according to another exemplary embodiment of the present application, and this embodiment takes the method as an example for being used in the anchor terminal shown in fig. 2. The method comprises the following steps:
step 1501, responding to the AR object setting instruction, displaying the AR object in a live broadcast picture, wherein the live broadcast picture is a picture acquired by the live broadcast terminal through a camera.
The implementation manner of this step may refer to step 301, which is not described herein again.
Step 1502, receiving an AR object interaction instruction, where the AR object interaction instruction is triggered by the anchor terminal or the audience terminal.
For the implementation of this step, reference may be made to step 302; details are not repeated here in this embodiment.
Step 1503, determining a target interaction action based on the interaction data included in the AR object interaction instruction.
In the implementation of this step, reference may be made to step 1103, and this embodiment is not described herein again.
Step 1504, in response to the AR object interaction instruction containing an interactive object, performing object recognition on the live broadcast picture to obtain an object recognition result.
In order to better show the interactive action of the AR object in the live display picture, the anchor terminal or the audience terminal may further specify that the AR object perform the target interaction action at a specific position, that is, control the AR object to perform the target interaction action at an interactive object. The interactive object may be, for example, a bed, table, chair, or the anchor in the live environment, and the AR object is controlled to make the interactive action at that object. For example, the audience terminal selects the target interaction action to be executed by voice input or by tapping an interaction option, and then controls the AR object to jump onto a table or onto the anchor. After the anchor terminal determines the target interaction action, it extracts the interactive object contained in the target interaction action and identifies the interactive object in the live broadcast picture by performing image recognition on the live broadcast picture collected by the camera.
Step 1505, in response to the object recognition result indicating that the live image contains the interactive object, controlling the AR object to move to the display position of the interactive object in the live image.
When the anchor terminal recognizes that the live broadcast picture contains the corresponding interactive object, it determines that the AR object can execute the target interaction action, and then controls the AR object to move to the display position of the interactive object in the live broadcast picture.
As shown in Fig. 16, a live broadcast screen is displayed on the viewer terminal 1610, with the AR object 1611 lying on the floor. The viewer taps the AR object 1611 and slides a finger to the position of the bed displayed in the live broadcast screen; this gesture interaction operation means controlling the AR object 1611 to run from the floor onto the bed. When the anchor terminal 1620 acquires the interaction gesture data, it determines the interactive object, identifies and confirms the specific position of the bed in the screen through an image recognition algorithm, and, once the position is determined, controls the AR object 1611 to run onto the bed.
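Steps 1504 and 1505 can be sketched as follows: the interactive object named in the instruction is looked up among the objects recognized in the live frame, and if found, the position the AR object should move to is returned; detect_objects is the same hypothetical recognition backend assumed in the earlier sketch.

```python
# Illustrative sketch of steps 1504-1505. `detect_objects` is the same hypothetical
# recognition backend assumed in the earlier sketch; the bottom-centre heuristic for
# the move target is an assumption.
def move_target_for_interaction(frame, interactive_object, detect_objects):
    """interactive_object: label extracted from the instruction, e.g. "bed" or "table"."""
    for det in detect_objects(frame):
        if det.label == interactive_object:
            x, y, w, h = det.box
            return (x + w // 2, y + h)     # bottom centre of the recognized object
    return None                            # object not visible in the current frame
```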
Step 1506, controlling the AR object to execute the target interaction action corresponding to the AR object interaction instruction at the interactive object.
After the AR object moves to the display position of the interactive object in the live broadcast picture, the AR object is controlled to execute the target interaction action corresponding to the AR object interaction instruction at the interactive object.
As shown in Fig. 16, the anchor terminal 1620 controls the AR object 1611 to move to the interactive object and perform the corresponding interactive action based on the received AR object interaction instruction.
In the embodiment of the application, the interactive object contained in the live broadcast picture is identified, and the AR object is controlled to move to the position of the interactive object to execute the corresponding interactive action, so that the interactive actions of the AR object presented in the live broadcast picture are more realistic and richer.
In a possible implementation manner, in order to further enrich the content and manner of live broadcast interaction, the anchor can also receive an AR object customized by a target terminal during the live broadcast. In this case, at least two AR objects can be displayed in the live broadcast picture displayed by the anchor terminal, where the customized AR object executes a corresponding interaction action only when the target terminal sends an AR object interaction instruction.
Fig. 17 shows a flowchart of a live interaction method provided in an exemplary embodiment of the present application.
Step 1701, in response to receiving an AR object customized by the target terminal, displaying the customized AR object in the live view.
When the target terminal needs to customize an AR object and present it to the anchor terminal, it enters the customization interface through the customization link in the live application program, and enters information such as the attribute characteristics, picture information, display duration, and customization cost of the AR object to be customized; after customization is completed, the customized AR object can be presented to the anchor terminal. When the anchor terminal receives an AR object customized by the target terminal during the live broadcast, prompt information is displayed on the live broadcast picture to prompt the anchor to place the customized AR object at a specific position in the live broadcast picture. In addition, the anchor can also customize a corresponding AR object for itself through the customization interface.
Illustratively, as shown in Fig. 18, the target terminal 1800 enters information such as the attribute characteristics, picture information, display duration, and customization cost of the AR object to be customized through the customization interface; when the target terminal needs to give the customized AR object to the anchor terminal, it does so by entering the anchor's account information or live room information.
Step 1702, in response to receiving an AR object interaction instruction, determining whether the AR object or the customized AR object is to be controlled, and acquiring the interaction data contained in the AR object interaction instruction to determine the target interaction action.
When the anchor terminal receives an AR object interaction instruction, it acquires the terminal account contained in the instruction and determines, based on that account, which terminal sent the instruction. If the AR object interaction instruction was sent by the target terminal, it is determined that the corresponding customized AR object needs to be controlled; if the instruction was not sent by the target terminal, it is determined that the AR object set by the anchor terminal needs to be controlled. The interaction data contained in the instruction is then acquired and the target interaction action is determined.
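A minimal sketch of the dispatch in step 1702, routing the instruction either to the customized AR object (when the sender is the target terminal) or to the anchor's own AR object; the field names in the instruction are assumptions.

```python
# Illustrative sketch of the dispatch in step 1702. The instruction is assumed to
# carry the sender account and the interaction data; these field names are
# assumptions, not part of the disclosed message format.
def resolve_controlled_object(instruction, target_account, anchor_ar_object, customized_ar_object):
    """Return the AR object to control and the target interaction action."""
    sender_account = instruction["account"]
    controlled = customized_ar_object if sender_account == target_account else anchor_ar_object
    target_action = instruction["interaction_data"]["target_action"]
    return controlled, target_action
```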
Step 1703, controlling the AR object or the customized AR object to execute the target interaction action based on the determined target interaction action.
When the AR object interaction instruction is determined to have been sent by the target terminal, the customized AR object is controlled to execute the target interaction action; when it was sent by another terminal, the AR object set by the anchor terminal is controlled to execute the target interaction action based on the determined target interaction data.
In the embodiment of the application, the anchor terminal receives a customized AR object given by the target terminal and places it in the live broadcast room. After an AR object interaction instruction is received, the AR object or the customized AR object that needs to execute the target interaction action is determined according to the instruction, so that the target terminal and the other audience terminals control the corresponding customized AR object and the AR object set by the anchor terminal, respectively, which enriches the live broadcast interaction modes.
In addition, in another possible implementation manner, the customized AR object may be configured to be displayed in the live view when the target account corresponding to the target terminal enters the live broadcast room, and not displayed in the live view when the target account has not entered the live broadcast room.
Please refer to fig. 19, which illustrates a block diagram of a live interactive apparatus according to an embodiment of the present application. The device includes:
a display module 1901, configured to respond to an augmented reality (AR) object setting instruction and display an AR object in a live view, where the live view is a view acquired by a live terminal through a camera;
an interaction instruction receiving module 1902, configured to receive an AR object interaction instruction, where the AR object interaction instruction is triggered by the live broadcast terminal or the audience terminal;
an interaction module 1903, configured to control the AR object to execute an interaction action corresponding to the AR object interaction instruction.
Optionally, the interaction module 1903 includes:
an identification unit for identifying a 3D object in a live environment;
and the execution unit is used for controlling the AR object to execute the interactive action corresponding to the AR object interactive instruction in the live broadcast environment based on the depth information of each 3D object.
Optionally, the execution unit is configured to:
determining the point cloud movement amount of the point cloud when the AR object executes the interaction action corresponding to the AR object interaction instruction, wherein the point cloud is used for controlling the AR object to move;
and controlling the AR object to execute an interaction action corresponding to the AR object interaction instruction in the live broadcast environment based on the depth information and the point cloud movement amount of each 3D object.
Optionally, the AR object interaction instruction is triggered by the viewer terminal; the interaction module 1903 further includes:
a first determining unit, configured to determine a target interaction action based on interaction data included in the AR object interaction instruction, where the interaction data is obtained when the audience terminal receives a virtual resource transfer instruction, and the virtual resource transfer instruction is used to trigger an audience account to transfer virtual resources to a live account;
and the first interaction unit is used for controlling the AR object to execute the target interaction action.
Optionally, the first determining unit is configured to:
acquiring virtual resource transfer amount data contained in the AR object interaction instruction;
and determining the target interaction action based on the virtual resource transfer amount data, wherein different virtual resource transfer amounts correspond to different interaction actions.
Optionally, the first determining unit is configured to:
acquiring interaction gesture data contained in the AR object interaction instruction, wherein the interaction gesture data is used for representing interaction gesture operation on the AR object;
and determining the target interaction action based on the interaction gesture operation represented by the interaction gesture data, wherein different interaction gesture operations correspond to different interaction actions.
Optionally, the first determining unit is configured to:
acquire interaction behavior data contained in the AR object interaction instruction, where the interaction behavior data is used to represent a user behavior on the audience terminal side, and the user behavior is acquired by the audience terminal through a camera;
and determine, based on the interaction behavior data, an action in which the AR object imitates the user behavior as the target interaction action.
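The imitation case can be sketched as selecting the AR object action that shares the label of the behavior recognized from the audience terminal's camera data. The label set and the fallback action below are assumptions introduced for illustration.

```kotlin
// Hypothetical set of behaviors the AR object is able to imitate; the behavior
// label is assumed to have been recognized by the audience terminal from its
// camera frames and carried in the interaction behavior data.
val imitableBehaviors = setOf("wave", "clap", "nod", "jump")

// The target interaction action is the AR object action with the same label as
// the recognized user behavior; otherwise fall back to an idle action.
fun imitationAction(recognizedBehavior: String): String =
    if (recognizedBehavior in imitableBehaviors) recognizedBehavior else "idle"
```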
Optionally, the AR object interaction instruction is triggered by the anchor terminal; the interaction module 1903 further includes:
a second determining unit, configured to determine a target interaction action through semantic recognition in response to the AR object interaction instruction being triggered by voice, or determine a target interaction action indicated by a selected interaction option in response to the AR object interaction instruction being triggered by an interaction option selection operation;
and a second interaction unit, configured to control the AR object to execute the target interaction action.
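The two trigger paths handled by the second determining unit can be sketched as a simple dispatch: semantic recognition for voice triggers and direct lookup for option-selection triggers. The sealed class and the toy keyword matching below are illustrative assumptions and do not stand in for an actual semantic recognition model.

```kotlin
// Hypothetical trigger types for an instruction issued at the anchor terminal.
sealed class AnchorTrigger {
    data class Voice(val transcript: String) : AnchorTrigger()
    data class OptionSelection(val selectedAction: String) : AnchorTrigger()
}

// Toy stand-in for semantic recognition: keyword matching on the transcript.
// A real semantic recognition component would replace this branch.
fun determineTargetAction(trigger: AnchorTrigger): String = when (trigger) {
    is AnchorTrigger.Voice -> when {
        "jump" in trigger.transcript -> "jump"
        "dance" in trigger.transcript -> "dance"
        else -> "idle"
    }
    is AnchorTrigger.OptionSelection -> trigger.selectedAction
}
```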
Optionally, the apparatus further comprises:
an identification module, configured to perform object identification on the live broadcast picture to obtain an object identification result in response to the AR object interaction instruction including an interaction object;
and a moving module, configured to control the AR object to move to a display position of the interaction object in the live broadcast picture in response to the object identification result indicating that the interaction object is contained in the live broadcast picture;
the interaction module 1903 is further configured to:
control the AR object to execute, at the interaction object, the interaction action corresponding to the AR object interaction instruction.
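The cooperation of the identification module, the moving module, and the interaction module for an instruction that names an interaction object could be sketched as follows. The detection result type and the callbacks are assumptions introduced for illustration.

```kotlin
// Hypothetical detection result for one object recognized in the live broadcast picture.
data class Detection(val label: String, val x: Float, val y: Float)

fun handleInstructionWithInteractionObject(
    interactionObject: String,
    detections: List<Detection>,             // object identification result for the current frame
    moveArObjectTo: (x: Float, y: Float) -> Unit,
    executeActionAtInteractionObject: () -> Unit
) {
    // Only act when the identification result indicates that the interaction
    // object is contained in the live broadcast picture.
    val target = detections.firstOrNull { it.label == interactionObject } ?: return
    moveArObjectTo(target.x, target.y)       // moving module
    executeActionAtInteractionObject()       // interaction module
}
```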
Optionally, the display module 1901 is further configured to:
display at least two AR objects in the live broadcast picture, where the at least two AR objects include a customized AR object corresponding to a target audience account, and the customized AR object is customized by the target audience account;
the interaction module 1903 is further configured to:
control the customized AR object to execute an interaction action corresponding to the AR object interaction instruction in response to the AR object interaction instruction being triggered by the target audience account.
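Routing an instruction triggered by the target audience account to that account's customized AR object, rather than to the AR object set by the anchor terminal, might be sketched as a lookup keyed by account. The map-based registry below is an assumption for illustration.

```kotlin
// Hypothetical registry mapping audience account ids to their customized AR objects.
val customizedArObjects = mapOf("audience_42" to "customized_pet_42")

// An instruction triggered by an account that owns a customized AR object
// controls that object; instructions from other accounts control the AR
// object set by the anchor terminal.
fun resolveControlledArObject(triggeringAccount: String, anchorArObject: String): String =
    customizedArObjects[triggeringAccount] ?: anchorArObject
```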
Optionally, the display module 1901 is further configured to:
display the customized AR object in the live broadcast picture in response to the target audience account being in the live broadcast room.
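This display rule can be sketched as a visibility toggle driven by whether the target audience account is currently in the live broadcast room. The presence set and the show/hide callbacks are illustrative assumptions.

```kotlin
// Hypothetical presence tracking for the live broadcast room.
val accountsInLiveRoom = mutableSetOf<String>()

// Show the customized AR object only while its target audience account is in
// the live broadcast room; hide it otherwise.
fun refreshCustomizedArObjectVisibility(
    targetAudienceAccount: String,
    show: () -> Unit,
    hide: () -> Unit
) {
    if (targetAudienceAccount in accountsInLiveRoom) show() else hide()
}
```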
To sum up, in the embodiment of the application, the AR object is displayed in the picture collected by the live broadcast terminal in response to the AR object setting instruction, and after an AR object interaction instruction sent by the live broadcast terminal or the audience terminal is received, the AR object is controlled to execute the corresponding interaction action based on the AR object interaction instruction. With the solution provided by the embodiment of the application, not only can the anchor terminal control the AR object to interact with the audience, but the audience terminal can also control the AR object to interact, which enriches the interaction modes in the live broadcast process and improves the interaction participation degree of the audience terminal during live broadcast.
Fig. 20 is a block diagram illustrating a terminal according to an exemplary embodiment of the present application. The terminal 2000 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2000 includes a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 2002 is used to store at least one instruction, and the at least one instruction is executed by the processor 2001 to implement the live broadcast interaction method provided by the method embodiments of the present application.
In some embodiments, the terminal 2000 may optionally further include a peripheral interface 2003 and at least one peripheral. The processor 2001, the memory 2002, and the peripheral interface 2003 may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface 2003 through a bus, a signal line, or a circuit board. Specifically, the peripheral includes at least one of a radio frequency circuit 2004, a display screen 2005, a camera assembly 2006, an audio circuit 2007, a positioning assembly 2008, and a power supply 2009.
The peripheral interface 2003 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, the memory 2002, and the peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 2004 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to collect touch signals on or above the surface of the display screen 2005. The touch signal may be input to the processor 2001 as a control signal for processing. In this case, the display screen 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 2005, disposed on the front panel of the terminal 2000; in other embodiments, there may be at least two display screens 2005, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the display screen 2005 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 2000. Furthermore, the display screen 2005 may be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 2005 may be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 2006 is used to capture images or videos. Optionally, the camera assembly 2006 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, the camera assembly 2006 may further include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. The dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and may be used for light compensation at different color temperatures.
The audio circuit 2007 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electric signals, and input the electric signals to the processor 2001 for processing or to the radio frequency circuit 2004 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be disposed at different positions of the terminal 2000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electric signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electric signal into a sound wave audible to a human being, or convert an electric signal into a sound wave inaudible to a human being for distance measurement. In some embodiments, the audio circuit 2007 may further include a headphone jack.
The positioning component 2008 is used to locate the current geographic location of the terminal 2000 to implement navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 2009 is used to supply power to the various components in the terminal 2000. The power supply 2009 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 2009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 2000 further includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: an acceleration sensor 2011, a gyroscope sensor 2012, a pressure sensor 2013, a fingerprint sensor 2014, an optical sensor 2015, and a proximity sensor 2016.
The acceleration sensor 2011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 2000. For example, the acceleration sensor 2011 may be used to detect the components of the gravitational acceleration on the three coordinate axes. The processor 2001 may control the display screen 2005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for collecting motion data of a game or a user.
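As an illustrative sketch only (not the terminal's actual implementation), choosing between a landscape and a portrait view from the gravity components collected by the acceleration sensor might compare the magnitudes of the X-axis and Y-axis readings; the threshold-free comparison below is an assumption.

```kotlin
import kotlin.math.abs

// Hypothetical: gx and gy are the gravity components on the X and Y axes of
// the terminal's coordinate system, as collected by the acceleration sensor.
fun isLandscape(gx: Float, gy: Float): Boolean = abs(gx) > abs(gy)

fun main() {
    println(isLandscape(gx = 9.5f, gy = 1.0f))  // true: gravity mostly along X, landscape view
    println(isLandscape(gx = 0.5f, gy = 9.7f))  // false: gravity mostly along Y, portrait view
}
```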
The gyroscope sensor 2012 can detect the body direction and the rotation angle of the terminal 2000, and the gyroscope sensor 2012 may cooperate with the acceleration sensor 2011 to collect the 3D motion of the user on the terminal 2000. The processor 2001 may implement the following functions according to the data collected by the gyroscope sensor 2012: motion sensing (such as changing the UI according to a tilt operation of the user), image stabilization during shooting, interface control, and inertial navigation.
The pressure sensor 2013 may be disposed on a side frame of the terminal 2000 and/or at a lower layer of the display screen 2005. When the pressure sensor 2013 is disposed on the side frame of the terminal 2000, a holding signal of the user on the terminal 2000 can be detected, and the processor 2001 performs left-right hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at the lower layer of the display screen 2005, the processor 2001 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 2005. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2014 is used to collect a fingerprint of the user, and the processor 2001 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprint. Upon identifying that the identity of the user is a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2014 may be disposed on the front, rear, or side of the terminal 2000. When a physical button or a vendor logo is provided on the terminal 2000, the fingerprint sensor 2014 may be integrated with the physical button or the vendor logo.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the display screen 2005 according to the ambient light intensity collected by the optical sensor 2015. Specifically, when the ambient light intensity is high, the display brightness of the display screen 2005 is increased; when the ambient light intensity is low, the display brightness of the display screen 2005 is reduced. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2015.
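A minimal sketch of adjusting display brightness from the ambient light intensity collected by the optical sensor is given below; the lux thresholds and the linear ramp are invented for illustration and are not part of the embodiment.

```kotlin
// Hypothetical mapping from ambient light intensity (in lux) to a display
// brightness level in [0, 1]; thresholds and the linear ramp are invented.
fun displayBrightnessFor(ambientLux: Float): Float = when {
    ambientLux >= 10_000f -> 1.0f                         // bright daylight: full brightness
    ambientLux <= 10f -> 0.1f                             // dark room: minimum brightness
    else -> 0.1f + 0.9f * (ambientLux / 10_000f)          // interpolate in between
}
```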
The proximity sensor 2016, also called a distance sensor, is generally disposed on the front panel of the terminal 2000. The proximity sensor 2016 is used to collect the distance between the user and the front surface of the terminal 2000. In one embodiment, when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually decreases, the processor 2001 controls the display screen 2005 to switch from the screen-on state to the screen-off state; when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually increases, the processor 2001 controls the display screen 2005 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in Fig. 20 does not constitute a limitation on the terminal 2000, and the terminal may include more or fewer components than those shown, combine some components, or adopt a different component arrangement.
The application provides a computer-readable storage medium, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the live broadcast interaction method provided by the above method embodiments.
The present application also provides a computer program product or computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the terminal performs the live broadcast interaction method in any one of the above embodiments.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The present application is intended to cover various modifications, alternatives, and equivalents, which may be included within the spirit and scope of the present application.