This application claims priority to Chinese Patent Application No. 202111401321.9, entitled "Virtual Trace Display Method, Apparatus, Device, Medium, and Computer Program Product," filed on November 19, 2021, the entire contents of which are incorporated herein by reference.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Virtual environment: the environment displayed (or provided) when an application runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated, semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Self-propelled chess (auto chess) game: a multi-player strategy battle game. In a self-propelled chess game, a user assembles virtual objects (that is, "chess pieces") into a virtual object formation, and this formation battles against enemy virtual object formations.
Chessboard: the competition area of the self-propelled chess game, that is, the area in which battles are prepared for and carried out. The chessboard may be any one of a two-dimensional virtual chessboard, a 2.5-dimensional virtual chessboard, and a three-dimensional virtual chessboard, which is not limited in this application.
The chessboard is divided into a battle area and a reserve area. The battle area comprises a plurality of battle grids of the same size, in which the pieces taking part in a battle are placed during the battle; the reserve area comprises a number of reserve slots holding pieces that do not participate in the current battle but can be dragged into the battle area during the battle preparation phase.
Regarding the arrangement of the grids in the battle area, in one possible implementation the battle area comprises n (rows) × m (columns) battle grids, where n is an integral multiple of 2, and two adjacent rows of grids are either aligned or staggered. In addition, the battle area is divided by rows into two parts, a local battle area and an enemy battle area; during the preparation stage, the user can only place pieces in the local battle area.
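The grid arrangement just described can be illustrated with a short sketch. This is a non-authoritative illustration: the function name, the coordinate scheme, and the half-cell stagger offset are assumptions rather than details from the application.

```python
def battle_grid(n, m, staggered=False):
    """Generate cell descriptors for an n-row x m-column battle area.

    n must be a multiple of 2; the lower n/2 rows form the local
    battle area and the upper n/2 rows the enemy battle area. When
    staggered is True, every other row is shifted by half a cell,
    modelling the staggered-row layout; otherwise rows are aligned.
    """
    if n % 2 != 0:
        raise ValueError("row count n must be a multiple of 2")
    cells = []
    for row in range(n):
        offset = 0.5 if (staggered and row % 2 == 1) else 0.0
        for col in range(m):
            cells.append({
                "row": row,
                "col": col,
                "x": col + offset,  # horizontal centre of the grid cell
                "side": "local" if row < n // 2 else "enemy",
            })
    return cells
```

For a 4 × 7 staggered board this yields 28 cells, the first two rows belonging to the local battle area.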
Virtual object: a movable object in the virtual environment. The movable object may be a virtual chess piece, a virtual character, a virtual animal, an animated character, and the like, such as characters, animals, plants, oil drums, walls, or stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created based on a skeletal animation technique. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a portion of the space in the three-dimensional virtual environment.
In the embodiments of the present application, the virtual objects include the different combat units in a self-propelled chess game, as well as a master control object that can move freely during a game. A combat unit may be any of various chess pieces or virtual characters, and the user can purchase, sell, upgrade, or otherwise operate on it. The master control object may be any of various virtual characters; by controlling the master control object, the user can move freely in the virtual game to obtain the rewards generated by the game, such as coin rewards, equipment rewards, and game object rewards.
Virtual game-play: a play mode in which at least two virtual objects battle against each other in a virtual environment. In the embodiments of the present application, the virtual game consists of at least two rounds of game processes, that is, the virtual game includes multiple rounds.
Fig. 1 shows a schematic diagram of a virtual trace display method provided in an exemplary embodiment of the present application. Referring to fig. 1, the current interface displays a virtual battle area containing a master control virtual object 110 and a game virtual object 120. The user triggers a footprint drawing function 130 corresponding to the master control virtual object 110; this function causes the master control virtual object 110 to keep displaying footprint traces 140 while it moves in the virtual battle area. The user then controls the master control virtual object 110 to move, so that it displays footprint traces 140 corresponding to its movement path, and a footprint drawing result 150 is finally displayed in the virtual battle area.
Fig. 2 shows a block diagram of an electronic device according to an exemplary embodiment of the present application. The electronic device 200 includes: an operating system 220 and application programs 222.
Operating system 220 is the base software that provides applications 222 with secure access to computer hardware.
Application 222 is an application that supports a virtual environment. Optionally, application 222 is an application that supports a three-dimensional virtual environment. The application 222 may be any one of a virtual reality application, a three-dimensional map program, a self-propelled chess game, a puzzle game, a Third-Person Shooting (TPS) game, a First-Person Shooting (FPS) game, a Multiplayer Online Battle Arena (MOBA) game, and a multiplayer gunfight survival game. The application 222 may be a standalone application, such as a standalone three-dimensional game program, or an online application.
In some optional embodiments, the method of the present application may be implemented by a terminal alone, or implemented by a server alone, or implemented by both the terminal and the server.
When the method is implemented by the terminal alone or by the server alone, taking implementation by the terminal alone as an example, the terminal runs a target application program supporting a virtual environment, and the target application program may be any one of a virtual reality application program, a three-dimensional map program, and a self-propelled chess game.
The target application program may be a standalone application program, such as a standalone 3D game program, or an online (networked) application program. In this embodiment, the target application program installed in the terminal is taken as a standalone application program. When the target application program runs in the terminal, the terminal displays the virtual battle area, the master control virtual object, and the game virtual object; upon receiving a footprint drawing trigger operation, the terminal controls the master control virtual object to move in the virtual battle area and keeps displaying the footprint traces left along the path the master control virtual object passes through, finally generating a footprint drawing result. Alternatively, the terminal may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or the like.
When the terminal and the server are implemented together, please refer to fig. 3, which schematically shows an implementation environment diagram of an embodiment of the present application. Illustratively, the implementation environment includes a terminal 310, a server 320, and a communication network 330, wherein the terminal 310 and the server 320 are connected via the communication network 330.
The terminal 310 runs a target application that supports a virtual environment. Illustratively, the terminal 310 displays the master control virtual object and the game virtual object through the target application program. When receiving a footprint drawing trigger operation, the terminal generates a corresponding trigger request and transmits it to the server 320; the server 320 activates the footprint drawing function according to the trigger request and feeds back a function activation result to the terminal 310. When the terminal 310 then receives a movement control operation on the master control virtual object, it controls the master control virtual object to move and displays the footprint traces of the master control virtual object.
Server 320 may be used to provide background services for clients of the target application (e.g., a game application) in the terminal 310. For example, server 320 may be a backend server for the target application (e.g., game application) described above. It should be noted that the server 320 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Cloud Technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied under the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, picture websites, and other web portals, require large amounts of computing and storage resources. With the development of the internet industry, each article may come to have its own identification mark that needs to be transmitted to a background system for logic processing; data at different levels are processed separately, and industrial data of all kinds require strong system background support, which can only be realized through cloud computing.
In some embodiments, the server 320 may also be implemented as a node in a blockchain system. Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain, essentially a decentralized database, is a chain of data blocks associated using cryptographic methods; each data block contains the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
Referring to fig. 4, a virtual trace display method according to an embodiment of the present application is shown. In this embodiment, the method is described as applied to the terminal 310 shown in fig. 3, taking a self-propelled chess game as an example. The method includes:
step 401, displaying the master control virtual object and the game virtual object.
The master control virtual object is a virtual object whose movement is controlled by the currently logged-in target account, and the game virtual object is a virtual object located in the virtual battle area during a single round of game-play and participating in the virtual game.
In some embodiments, the virtual battle area displays a master control virtual object controlled by the target account currently logged in by the user, which indicates the user identity of the target account in the game-play (e.g., the identity corresponding to the player and the identity corresponding to an enemy player). Control of the master control virtual object includes controlling it to move freely in the virtual battle area; or controlling it to obtain the rewards generated after a virtual game ends. For example, after a round of virtual game-play ends, corresponding rewards are generated and displayed, randomly distributed, in the virtual battle area; the rewards include coin rewards (for subsequently purchasing game virtual objects), equipment rewards (for boosting a game virtual object's attributes, such as mana, attack speed, and health), game virtual object rewards (i.e., obtaining a random game virtual object free of charge), and the like, and the user controls the master control virtual object to move through the virtual battle area to collect the rewards distributed there. Alternatively, the master control virtual object may be controlled to change actions in the virtual battle area, generate a personalized expression, and so on, which is not limited herein.
In some embodiments, the virtual battle area also displays a game virtual object, which automatically executes battle actions according to its object attributes when participating in a virtual game; the virtual game consists of at least two rounds of game processes. In the embodiment of the present application, the virtual game is a game in a self-propelled chess game, and the game virtual object is a combat unit in the self-propelled chess game. Optionally, the game virtual object has an object level. Taking a self-propelled chess game as an example, a specified number of game virtual objects of equal level can be synthesized to achieve a level promotion; for example, two one-star game virtual objects can be synthesized into one two-star game virtual object, thereby improving its battle attributes (such as mana, health, and attack power). Optionally, the game virtual object may be upgraded automatically, for example, when the virtual battle area contains the specified number of equal-level game virtual objects and the level promotion requirement is met, they are synthesized automatically; or the level may be promoted manually, that is, the user chooses which qualifying game virtual objects to synthesize.
Optionally, the virtual battle area includes a reserve area, a battle area, and a spectating area, wherein the spectating area is located on one or both sides of the reserve area, in an outward extension of the battle area, or at a corner of the battle area; the virtual battle area includes at least one spectating area, which is not limited herein. The game virtual objects include alternate objects located in the reserve area and battle objects located in the battle area. When there are multiple alternate objects, the user may select an alternate object and move it into the battle area as a battle object, or the server may do so automatically. When the virtual game starts, the master control virtual object is located in the spectating area, and during the virtual game the battle objects in the battle area battle automatically. Notably, the master control virtual object may move freely within the battle, reserve, or spectating area.
Optionally, the virtual battle area includes at least one master control virtual object and at least one game virtual object. Each account has a corresponding virtual battle area that can be personalized; when a virtual game is played, the virtual battle area corresponding to either of the two matched accounts is selected at random. For example, the virtual battle area corresponding to account A is "grassland" and that corresponding to account B is "camp"; when accounts A and B are matched for a game, both interfaces display that "grassland" has been selected as the virtual battle area of the current round. Thus the target account may or may not have a correspondence with the virtual battle area in use, which is not limited herein.
Step 402, receiving a footprint rendering trigger operation.
The footprint drawing trigger operation is used to trigger the master control virtual object to retain and display footprint traces in the virtual battle area.
In some embodiments, the terminal receives a footprint drawing trigger operation from the user. The operation triggers the footprint drawing function of the master control virtual object; this function indicates that, while the master control virtual object is controlled to move, the footprint traces left by its movement are displayed on the terminal interface. The display mode of the footprint traces includes at least one of the following modes:
1. continuous display, that is, every time the master control virtual object moves, the footprint trace corresponding to that movement is displayed in the virtual battle area, for example: when the master control virtual object takes one step, the footprint trace corresponding to that step is displayed in the virtual battle area;
2. interval display, that is, a movement threshold is set, and when the number of movements of the master control virtual object in the virtual battle area reaches the threshold, the corresponding footprint traces are displayed in the virtual battle area, for example: if the movement threshold is three steps, then when the master control virtual object has taken three steps, the footprint traces corresponding to those three steps are displayed in the virtual battle area;
3. random display, that is, the master control virtual object displays footprint traces at random while moving, for example: when the master control virtual object takes three steps, the footprint traces corresponding to the first and third steps are displayed at random in the virtual battle area.
It should be noted that the above display manners of the footprint traces are only illustrative examples, and the embodiments of the present application do not limit the specific display manner of the footprint traces.
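The three display modes above can be sketched in code. This is a minimal, non-authoritative illustration: the function name, its parameters, and the 50% probability assumed for the random mode are invented for the example and are not details from the application.

```python
import random

def should_show_footprint(mode, step_index, threshold=3, rng=None):
    """Decide whether the footprint for movement step `step_index`
    (1-based) is displayed, under one of the three display modes:
    "continuous" shows a trace on every step; "interval" shows traces
    only when the accumulated step count reaches the movement threshold;
    "random" shows each step's trace with an assumed 50% probability."""
    if mode == "continuous":
        return True
    if mode == "interval":
        return step_index % threshold == 0
    if mode == "random":
        return (rng or random.Random()).random() < 0.5
    raise ValueError(f"unknown display mode: {mode}")
```

With the interval mode and a threshold of three, only every third step leaves a visible trace, matching the "three steps" example above.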
Schematically, footprint traces may be displayed in real time or not in real time. Taking walking as the movement mode of the master control virtual object as an example: with real-time display, for each step taken while the master control virtual object walks, the footprint trace corresponding to the previous step is displayed in the virtual battle area (or the traces of the previous several steps are displayed at each step, the number of steps being settable according to actual requirements), and when the master control virtual object stops moving, footprint traces also stop being added (that is, only the footprint traces already in the current virtual battle area remain displayed); with non-real-time display, the footprint traces corresponding to the whole movement are displayed in the virtual battle area after the master control virtual object finishes moving. This is not limited herein.
Optionally, the trigger operation includes a click operation, a long-press operation, a sliding operation, or a motion control operation performed on the terminal (for example, "shaking"), which is not limited herein.
In some embodiments, after triggering the footprint rendering function of the master virtual object, the usage of the function includes at least one of the following modes:
1. the attributes of the footprint traces, including shape, color, size, transparency, and the like, are set in a personalized manner, and the master control virtual object is controlled to move so as to generate different footprint traces, thereby realizing footprint drawing;
2. a route is preset, and the master control virtual object is controlled to move along the preset route so as to form the footprint drawing pattern corresponding to that route.
It should be noted that the above usages of the footprint drawing function are only illustrative examples, and the embodiments of the present application do not limit the specific usage of the footprint drawing function.
Optionally, the footprint traces of different master control virtual objects are the same or different, and the footprint traces of the same master control virtual object in different rounds of virtual game-play are the same or different, which is not limited herein; the footprint traces may change automatically or be customized individually by the user.
Step 403, receiving a movement control operation for the master virtual object.
Wherein the movement control operation is used for controlling the main control virtual object to move in the virtual fighting area.
Illustratively, the movement mode for controlling the main control virtual object to move includes at least one of walking, jumping, rolling, crawling, flying, gliding, and the like, which is not limited herein.
In some embodiments, the control manner in which the user controls the master virtual object to move includes at least one of the following manners:
1. movement is controlled by triggering the master control virtual object, for example: the user long-presses the master control virtual object and, while keeping the long-press state, performs a sliding operation in the virtual battle area, so that the master control virtual object slides in the virtual battle area;
2. the master control virtual object is moved through a movement control, for example: a movement wheel is displayed in the interface, and the user slides the wheel upward to make the master control virtual object move upward;
3. the master control virtual object is moved through a direction control, for example: a direction control comprising forward, backward, left, and right keys is displayed in the interface, and clicking the forward key controls the master control virtual object to move forward;
4. movement is controlled by setting a target position in the virtual battle area, that is, the user sets a target position in the virtual battle area (for example, by clicking the screen to determine the target position, or by selecting one of several preset positions, which is not limited herein), and the master control virtual object moves from its current position to the target position as the movement end point; the movement path may be a straight line, a curve, or the like, which is not limited herein;
5. movement of the master control virtual object is controlled by drawing a path in the virtual battle area, that is, the user draws a path in the virtual battle area, for example: the user draws a circle in the virtual battle area, and after the circle is drawn, the master control virtual object moves along the path corresponding to the circle.
It should be noted that the above manners of controlling the master control virtual object are only illustrative examples, and the embodiments of the present application do not limit the specific manner of controlling the master control virtual object to move.
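Movement toward a selected target position (manner 4 above) can be sketched as follows, recording where footprint traces would fall. The straight-line path, fixed step length, and all names are illustrative assumptions only.

```python
import math

def move_to_target(start, target, step=1.0):
    """Move the master control virtual object from `start` to `target`
    along a straight line, returning the ordered positions at which
    footprint traces would be left; the target is always included."""
    sx, sy = start
    tx, ty = target
    dist = math.hypot(tx - sx, ty - sy)
    positions = []
    steps = int(dist // step)
    for i in range(1, steps + 1):
        t = (i * step) / dist  # fraction of the path covered so far
        positions.append((sx + (tx - sx) * t, sy + (ty - sy) * t))
    if not positions or positions[-1] != (tx, ty):
        positions.append((tx, ty))
    return positions
```

A curved path (also permitted above) would replace the linear interpolation with a different parametrization; the footprint-recording idea is unchanged.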
Step 404, displaying, based on the footprint drawing trigger operation and the movement control operation, the footprint drawing result generated after the master control virtual object moves in the virtual battle area.
In some embodiments, after the footprint drawing function is triggered, the master control virtual object is controlled to move in the virtual battle area, the footprint traces corresponding to its movement are displayed, and the footprint drawing result corresponding to those traces is finally displayed in the virtual battle area. The footprint drawing result is generated in at least one of the following ways:
1. the footprint traces generated by the movement of the master control virtual object are taken directly as the footprint drawing result; that is, when the traces connect into the shape of a circle, the footprint drawing result is the circle displayed by the combined traces;
2. the virtual battle area contains three-dimensional display elements; a corresponding element map image is determined, and the element map image and the footprint traces are fused as the finally displayed footprint drawing result.
It should be noted that the above-mentioned generation manner of the footprint rendering result is only an illustrative example, and the embodiment of the present application does not limit this.
Optionally, the footprint drawing result is displayed in the terminal interface with a dynamic effect, for example, the current interface displays a light-changing special effect or a flickering special effect of the footprint traces; alternatively, the footprint drawing result is displayed as a static effect, which is not limited herein.
Illustratively, the footprint drawing result remains displayed throughout the virtual game-play, or disappears after a single round of virtual game-play ends, or disappears after several rounds end; or the interface includes a clearing control, and the user clears the footprint traces by triggering it, which is not limited herein.
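The retention-and-clearing behaviour described above can be sketched as a small container. The class name, method names, and round-based lifetime policy are illustrative assumptions, not the application's implementation.

```python
class FootprintCanvas:
    """Accumulates footprint traces into a drawing result. Traces
    persist until cleared explicitly (the 'clearing control' case)
    or until the configured number of rounds has ended."""

    def __init__(self, rounds_to_live=1):
        self.rounds_to_live = rounds_to_live
        self.traces = []
        self._rounds_elapsed = 0

    def add_trace(self, position, style="default"):
        self.traces.append({"pos": position, "style": style})

    def end_round(self):
        """Called when a round of virtual game-play finishes."""
        self._rounds_elapsed += 1
        if self._rounds_elapsed >= self.rounds_to_live:
            self.clear()

    def clear(self):
        """Corresponds to the user triggering the clearing control."""
        self.traces.clear()
        self._rounds_elapsed = 0

    def drawing_result(self):
        # The drawing result here is simply the ordered trail of footprints.
        return [t["pos"] for t in self.traces]
```

Setting `rounds_to_live` larger than the match length would model the "remains displayed throughout the game-play" case.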
In summary, the embodiment of the present application provides a virtual trace display method. The footprint drawing function of the master control virtual object in the virtual battle area is triggered, the master control virtual object is controlled to move in the virtual battle area, the footprint traces left by its movement are displayed in the current virtual battle area, and a footprint drawing result is finally generated, with the footprint traces of the master control virtual object retained on display. This enriches the playability of the game for the user during virtual game-play; meanwhile, displaying the footprint drawing result in the virtual battle area expands the user's modes of social contact during game-play, facilitating interaction and communication with other players and enhancing game interactivity.
In an alternative embodiment, the footprint drawing trigger operation includes a function activation trigger operation. Referring to fig. 5, which schematically shows a flowchart of a virtual trace display method provided in an exemplary embodiment of the present application, the method is described as applied to the terminal 310 shown in fig. 3, taking a self-propelled chess game as an example. The method includes:
step 501, displaying the master control virtual object and the game virtual object.
The master control virtual object is a virtual object whose movement is controlled by the currently logged-in target account, and the game virtual object is a virtual object located in the virtual battle area during a single round of game-play and participating in the virtual game.
The descriptions of the master control virtual object and the game virtual object in step 501 are detailed in step 401 and are not repeated here.
Step 502, receiving a function activation trigger operation.
The function activation triggering operation is used for activating the social function of the master virtual object in the virtual battle area.
In some embodiments, the function activation trigger operation triggers the social functions of the master control virtual object, including holding a text conversation or voice conversation with other users in the game, sending emoticons, controlling the master control virtual object to perform footprint drawing, and the like, which is not limited herein.
In some embodiments, the activation mode includes performing a trigger operation on the master control virtual object, such as a click operation or a long-press operation on it; or the virtual battle area includes an activation control, on which trigger operations such as click operations, sliding operations, and long-press operations are performed, which is not limited herein.
Step 503, in response to the function activation trigger operation, displaying the footprint drawing trigger control.
The footprint drawing trigger control corresponds to the function of retaining and displaying footprint traces.
In some embodiments, in response to the function activation trigger operation, a function activation wheel is displayed, which shows the controls corresponding to the social functions; the footprint drawing trigger control is displayed in the function activation wheel.
Illustratively, after the function activation trigger operation is received, a function activation wheel is displayed in the virtual battle area. The function activation wheel contains the controls corresponding to the social functions, such as a text conversation trigger control, a voice conversation trigger control, an emoticon set trigger control (for opening the emoticon set so that an emoticon can be selected and sent), and a footprint drawing trigger control. Referring to fig. 6, which shows a schematic diagram of displaying the function activation wheel according to an exemplary embodiment of the present application: the virtual battle area of fig. 6 contains a master control virtual object 610; the user performs a long-press operation 620 on the master control virtual object 610, so that a function activation wheel 630 is displayed in the virtual battle area, and the function activation wheel 630 includes a footprint drawing trigger control 640.
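A hypothetical dispatch for the function activation wheel might look like the following; the control identifiers and state keys are invented for illustration and do not appear in the application.

```python
def activate_social_function(control, state):
    """Dispatch a selection made on the function activation wheel.
    Each wheel control toggles the corresponding social function in a
    simple state dict (an illustrative stand-in for real game state)."""
    handlers = {
        "text_chat": lambda s: s.update(chat_open=True),
        "voice_chat": lambda s: s.update(voice_open=True),
        "emoticon_set": lambda s: s.update(emoticon_panel=True),
        "footprint_drawing": lambda s: s.update(footprint_mode=True),
    }
    if control not in handlers:
        raise ValueError(f"unknown wheel control: {control}")
    handlers[control](state)
    return state
```

Selecting the footprint drawing trigger control would thus enable footprint mode, corresponding to step 504 below.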
And step 504, receiving a trigger operation on the footprint drawing trigger control as the footprint drawing trigger operation.
In some embodiments, as shown in fig. 6, the function activation wheel 630 includes the footprint drawing trigger control 640. The user triggers the footprint drawing function of the master virtual object 610 by performing a trigger operation on the footprint drawing trigger control 640 displayed in the virtual combat area, and this trigger operation serves as the footprint drawing trigger operation.
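As an illustration of this dispatch, the wheel can be modeled as a mapping from displayed controls to the social functions they trigger. This is a minimal sketch; the control identifiers and handler names below are hypothetical, not part of the application:

```python
def build_function_wheel():
    """Hypothetical function activation wheel: maps each displayed
    control to the social function it triggers."""
    return {
        "text_chat": lambda: "open text conversation",
        "voice_chat": lambda: "open voice conversation",
        "emoticon_set": lambda: "open emoticon set",
        "footprint_draw": lambda: "activate footprint drawing",
    }

def on_wheel_trigger(wheel, control_id):
    """Dispatch a trigger operation on a wheel control to its handler."""
    return wheel[control_id]()
```

Under these assumed names, a trigger operation on the footprint drawing control would resolve to activating the footprint drawing function, which then serves as the footprint drawing trigger operation.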
Steps 5051 through 5052 are two ways of controlling the movement of the master virtual object.
At step 5051, a selection operation on a target position in the virtual combat area is received as a movement control operation.
The selection operation is used to instruct the master virtual object to move to the target position along a preset moving path.
In some embodiments, the target position refers to the position where the master virtual object finally stays after moving. The user triggers the virtual combat area, and the trigger point is used as the target position; the trigger mode includes at least one of a click operation, a long-press operation, and a sliding operation. The selection mode of the target position includes at least one of the following modes:
1. a user triggers any position in the virtual battle area to serve as a target position;
2. a candidate position set is displayed on the interface, and the user takes a candidate position as the target position by selecting it, that is: at least one candidate position is displayed in the current virtual combat area, and the user takes a candidate position as the target position by triggering it.
It should be noted that the above-mentioned selection manner of the target position is only an exemplary example, and the specific selection manner of the target position in the embodiment of the present application is not limited at all.
Optionally, after the target position is selected, a preset moving path between the current position of the master virtual object and the target position is set, and the master virtual object moves from the current position to the target position along the preset moving path, where the preset moving path may be the straight line between the two positions, or a curve (which may be set as required), which is not limited herein.
Optionally, each time the user selects one target position, the master virtual object is controlled to move to that target position before the next target position is selected; or, the user selects a plurality of target positions at a time with a trigger sequence among them, and the master virtual object moves according to the trigger sequence of the target positions until it reaches the last triggered target position, which is not limited herein.
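The target-queue movement described above can be sketched as follows. This is a minimal illustration under assumed names (`MasterVirtualObject`, `select_target`, and `update` are hypothetical, not from the application); it moves the object along a straight-line preset path through the queued targets in trigger order:

```python
import math

class MasterVirtualObject:
    """Minimal sketch of a controllable object that walks toward queued targets."""

    def __init__(self, x, y, speed=1.0):
        self.x, self.y = x, y
        self.speed = speed          # distance covered per update tick
        self.targets = []           # target positions, in trigger order

    def select_target(self, x, y):
        """Queue a target position; targets are visited in trigger order."""
        self.targets.append((x, y))

    def update(self):
        """Advance one tick along the straight-line preset path to the next target."""
        if not self.targets:
            return
        tx, ty = self.targets[0]
        dx, dy = tx - self.x, ty - self.y
        dist = math.hypot(dx, dy)
        if dist <= self.speed:      # reached this target; move on to the next
            self.x, self.y = tx, ty
            self.targets.pop(0)
        else:
            self.x += self.speed * dx / dist
            self.y += self.speed * dy / dist
```

Calling `update` once per frame until `targets` is empty reproduces the behavior where the object visits each triggered position in sequence and stops at the last one.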
At step 5052, the path drawing operation is received as a move control operation.
The path drawing operation corresponds to a customized path, and the path drawing operation is used for controlling the main control virtual object to move along the customized path.
In some embodiments, the path drawing operation refers to a user drawing a customized path in the virtual battle area, and the master virtual object moves according to the customized path, wherein the manner of drawing the customized path includes at least one of the following manners:
1. a user performs sliding operation in the virtual battle area, displays a line corresponding to the sliding operation, and takes the line as a customized path;
2. a drawing control is displayed on the interface, and the user draws a route in the virtual combat area by triggering or manipulating the drawing control, where the drawing result is used as the customized path.
It should be noted that the above-mentioned drawing manner of the customized path is only an illustrative example, and the embodiment of the present application does not limit any specific drawing manner of the customized path.
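The slide-based drawing in mode 1 can be illustrated by downsampling the raw touch samples into a polyline that serves as the customized path. This is a sketch only; `record_slide_path` and the `min_segment` threshold are hypothetical names, not from the application:

```python
import math

def record_slide_path(touch_points, min_segment=0.5):
    """Build a customized path from raw slide-operation samples,
    dropping points closer than min_segment to the last kept point."""
    path = []
    for p in touch_points:
        if not path or math.dist(path[-1], p) >= min_segment:
            path.append(p)
    return path
```

The master virtual object would then follow the returned polyline vertex by vertex, which matches the behavior where the line corresponding to the sliding operation is taken as the customized path.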
And step 506, displaying the moving process of the main control virtual object in the virtual combat area based on the footprint drawing trigger operation and the movement control operation.
Wherein, real-time footprint traces are generated in the virtual battle area during the moving process.
In some embodiments, after the target position is selected or the customized path is drawn, the master virtual object moves toward the target position or along the customized path, the moving process of the master virtual object is displayed in the virtual combat area, and a real-time footprint trace is generated during the movement, where the real-time footprint trace indicates the moving path of the master virtual object.
And step 507, in response to the master virtual object terminating movement in the virtual combat area, displaying a footprint drawing result based on the footprint traces generated by the movement of the master virtual object.
In some embodiments, after the master virtual object reaches the target position or moves to the end of the customized path, the movement stops; at this time, the footprint trace generated by the master virtual object during the movement is displayed in the virtual combat area, and a footprint trace drawing result is finally displayed based on the footprint trace, where the display mode of the footprint trace drawing result is described in detail in the following embodiments.
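One way to realize the real-time footprint trace is to drop a mark every fixed travel distance along the moving path. The sketch below illustrates this under assumed names (`footprints_along` and `stride` are illustrative, not from the application):

```python
import math

def footprints_along(path, stride=2.0):
    """Place a footprint mark every `stride` units of travelled
    distance along a polyline path (the trace left while moving)."""
    marks = [path[0]]
    travelled = 0.0                 # distance since the last mark
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        pos = 0.0                   # distance consumed within this segment
        while travelled + (seg - pos) >= stride:
            pos += stride - travelled
            t = pos / seg           # interpolate the mark on this segment
            marks.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            travelled = 0.0
        travelled += seg - pos
    return marks
```

In an actual game these marks would be spawned frame by frame as the object moves, producing the trace displayed during the moving process; the final list of marks corresponds to the footprint drawing result shown once movement stops.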
In summary, the embodiment of the present application provides a virtual trace display method. By triggering the footprint trace drawing function of the master virtual object in the virtual combat area and controlling the way the master virtual object moves in the virtual combat area, the footprint trace left by the movement of the master virtual object is displayed in the current virtual combat area, and a footprint trace drawing result is finally generated, in which the footprint trace of the master virtual object is retained and displayed. This enriches the game playability of the user in the virtual match process; meanwhile, displaying the footprint trace drawing result in the virtual combat area expands the social modes available to the user during the match, facilitating interaction and communication with other players and enhancing game interactivity.
In this embodiment, the footprint drawing function is triggered through the function activation wheel, and the master virtual object is controlled to move in the virtual combat area in various ways, including selecting a target position so that the master virtual object moves toward it, or drawing a customized route in the virtual combat area so that the master virtual object moves along the customized route and finally stops, after which the footprint drawing result is displayed in the virtual combat area, thereby enriching the diversity of the user's play styles and increasing the frequency of human-computer interaction.
In an alternative embodiment, the footprint drawing result can be displayed in multiple ways. Illustratively, please refer to fig. 7, which shows a flowchart of a virtual trace display method provided in an exemplary embodiment of the present application. As shown in fig. 7, steps 707 to 7082 are performed after steps 501 to 506, and the method includes:
And step 707, fusing the footprint trace generated by the movement of the master virtual object with the area background corresponding to the virtual combat area to obtain a footprint trace drawing result.
In some embodiments, based on the area background containing a three-dimensional display element, the footprint trace generated by the movement of the master virtual object is fused with the element mapping image corresponding to the three-dimensional display element to obtain the footprint trace drawing result; or, based on the area background containing an area display element of the virtual combat area, the footprint trace generated by the movement of the master virtual object is fused with the area display element to obtain the footprint trace drawing result.
Illustratively, the area background corresponding to the virtual combat area includes a plurality of scene elements, and the scene elements include static elements or animation elements, or the scene elements include three-dimensional display elements or area display elements, where the three-dimensional display elements correspond to three-dimensional stereoscopic effects of the scene elements, and the area display elements correspond to planar effects of the scene elements, which is not limited herein, such as: taking a camp scene as an example, the three-dimensional display elements include houses, trees, and the like, and the area display elements include lawns, stones, and the like.
Optionally, in one optional method, an element mapping image corresponding to the three-dimensional display element is obtained, that is, the three-dimensional display element is mapped to a corresponding planar element map; the element mapping image is fused with the footprint trace of the master virtual object, and the result is displayed as the footprint trace drawing result. For example: taking a camp scene as an example, the final footprint drawing result is displayed as an element mapping image including the mapped three-dimensional display elements of the camp scene, such as houses and trees, with the footprint trace of the master virtual object displayed on the lawn.
Optionally, in another optional method, an area display image corresponding to the area display element is determined, the area display image is fused with the footprint trace of the master virtual object, and the result is displayed as the footprint trace drawing result. For example: taking a runway scene as an example, the runway scene includes a runway element; a runway image corresponding to the runway element is determined, the user controls the master virtual object to move on the runway to generate a footprint trace, the runway image and the footprint trace are fused, and finally the image containing the footprint trace on the runway is displayed as the footprint trace drawing result. That is, an area display image corresponding to the area display element is acquired, where the area display image is a planar image corresponding to the area scene element, and the footprint trace generated by the movement of the master virtual object is fused with the area display image to obtain the footprint trace drawing result.
Optionally, the fusing method includes displaying the footprint trace in combination with the area background corresponding to the virtual combat area, or automatically (or manually by the user) adjusting the brightness of the footprint trace to be consistent with that of the area background, or adjusting the size or direction of the footprint trace, which is not limited herein.
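As a stand-in for the fusion step, the sketch below composites footprint marks onto a character-grid "area background". A real implementation would blend textures and match brightness as described; all names here are hypothetical illustrations:

```python
def fuse_footprints(background_rows, marks, glyph="*"):
    """Fuse footprint marks into a character-grid area background and
    return the combined drawing result (the background is untouched)."""
    canvas = [list(row) for row in background_rows]
    for x, y in marks:
        # Stamp each mark that falls inside the background bounds.
        if 0 <= y < len(canvas) and 0 <= x < len(canvas[y]):
            canvas[y][x] = glyph
    return ["".join(row) for row in canvas]
```

The same structure carries over to the texture case: the background image plays the role of `background_rows`, and each footprint decal is blended in at its position instead of overwriting a cell.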
Steps 7081 to 7082 are two ways of removing the footprint result.
And step 7081, in response to the end of the nth round of the match, displaying the disappearance process of the footprint trace drawing result, where n is a positive integer.
In some embodiments, a virtual battle is conducted in each round of the match, the user controls the master virtual object to perform footprint trace drawing in the virtual combat area, and the footprint trace drawing result is displayed in the virtual combat area. When the nth round ends, the footprint trace drawing result disappears, and the disappearance process of the footprint trace is displayed in the virtual combat area, where the disappearance process includes gradual disappearance, that is, the footprint traces disappear one by one, or simultaneous disappearance, that is, all footprint traces disappear at the same time, and the like, which is not limited herein.
Step 7082, displaying the disappearance process of the footprint trace drawing result in response to receiving a trace disappearance trigger operation.
And the trace disappearance triggering operation is used for indicating to cancel the display of the footprint drawing result.
In some embodiments, a trace disappearance trigger control is displayed; and in response to receiving a trigger operation on the trace disappearance trigger control as the trace disappearance trigger operation, the disappearance process of the footprint trace drawing result is displayed.
Illustratively, a trace disappearance trigger control is displayed on the terminal interface, and the control corresponds to a trace disappearance function. The user triggers the trace disappearance function by triggering the trace disappearance trigger control, where the trigger operation includes at least one of a click operation, a long-press operation, and a sliding operation. After the trace disappearance function is triggered, the disappearance process of the footprint drawing result is displayed in the virtual combat area, where the disappearance process includes gradual disappearance or simultaneous disappearance, which is not limited herein.
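The two disappearance modes can be sketched as a generator yielding the successive display states of the footprint drawing result (illustrative only; `disappear` and its mode strings are assumptions, not from the application):

```python
def disappear(marks, mode="gradual"):
    """Yield successive display states of the footprint drawing result
    as it disappears: one mark at a time ('gradual'), or everything
    at once ('simultaneous')."""
    if mode == "simultaneous":
        yield []                    # all footprints vanish in one step
        return
    remaining = list(marks)
    while remaining:
        remaining = remaining[1:]   # oldest footprint fades first
        yield list(remaining)
```

A renderer would consume one state per animation frame, so "gradual" plays out over as many frames as there are footprints while "simultaneous" clears the area in a single frame.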
In summary, the embodiment of the present application provides a virtual trace display method. By triggering the footprint trace drawing function of the master virtual object in the virtual combat area and controlling the way the master virtual object moves in the virtual combat area, the footprint trace left by the movement of the master virtual object is displayed in the current virtual combat area, and a footprint trace drawing result is finally generated, in which the footprint trace of the master virtual object is retained and displayed. This enriches the game playability of the user in the virtual match process; meanwhile, displaying the footprint trace drawing result in the virtual combat area expands the social modes available to the user during the match, facilitating interaction and communication with other players and enhancing game interactivity.
In this embodiment, the footprint drawing result is displayed in multiple ways, and its display can also be cancelled in multiple ways, so that the user has the opportunity to draw footprints multiple times and display them in various forms, which improves the user's enjoyment of the game.
In an alternative embodiment, please refer to fig. 8 schematically, which shows a flowchart of a virtual trace display method provided in an exemplary embodiment of the present application, and as shown in fig. 8, the method includes:
step 801, trigger the master virtual object.
Optionally, the master virtual object and the opposing virtual object in the virtual combat area are displayed on the interface, and the user displays the function activation wheel by triggering the master virtual object. The virtual combat area may be the local user's combat area or the enemy user's combat area, which is not limited herein. Referring to fig. 9, which shows a schematic diagram of a virtual trace display method provided in an exemplary embodiment of the present application, as shown in fig. 9, the user long-presses the master virtual object 901, and a function activation wheel 902 is displayed in the virtual combat area, where the function activation wheel 902 includes a footprint drawing trigger control 903.
Step 802, trigger the function activation wheel.
Optionally, as shown in fig. 9, a click operation is performed on the footprint drawing trigger control 903 included in the function activation wheel 902, the footprint drawing function corresponding to the master virtual object is activated, and the footprint drawing trigger control 903 is displayed in the virtual combat area.
And step 803, triggering the footprint drawing trigger control.
Optionally, as shown in fig. 9, the footprint drawing trigger control 903 is located on the right side of the interface (not shown in the figure), and the user controls the master virtual object 901 to move in the virtual combat area while long-pressing the footprint drawing trigger control 903.
And step 804, displaying footprint traces.
Optionally, as shown in fig. 9, the interface includes a movement control 904 located on the left side of the interface; when the user controls the master virtual object 901 to move in the virtual combat area while long-pressing the footprint drawing trigger control 903, a footprint trace 905 corresponding to the master virtual object 901 is displayed in the virtual combat area.
Step 805, footprint trace is not displayed.
Optionally, as shown in fig. 9, when the user releases the long press on the footprint drawing trigger control 903 and controls the master virtual object 901 to move in the virtual combat area, the footprint trace 905 corresponding to the master virtual object 901 is not displayed (this process is not shown in fig. 9).
And step 806, displaying the footprint drawing result.
Optionally, as shown in fig. 9, when the user controls the master virtual object 901 to stop moving in the virtual combat area, the footprint drawing result 906 is displayed in the virtual combat area.
In summary, the embodiment of the present application provides a virtual trace display method. By triggering the footprint trace drawing function of the master virtual object in the virtual combat area and controlling the way the master virtual object moves in the virtual combat area, the footprint trace left by the movement of the master virtual object is displayed in the current virtual combat area, and a footprint trace drawing result is finally generated, in which the footprint trace of the master virtual object is retained and displayed. This enriches the game playability of the user in the virtual match process; meanwhile, displaying the footprint trace drawing result in the virtual combat area expands the social modes available to the user during the match, facilitating interaction and communication with other players and enhancing game interactivity.
In the embodiment of the application, the user generates the footprint trace by triggering the footprint trace drawing function of the master virtual object and controlling the master virtual object to move in the virtual combat area, and the footprint trace drawing result is finally displayed in the virtual combat area, visible to both the user and the enemy player during the virtual match. Meanwhile, the user can save the footprint trace drawing result by means such as video recording or screenshots and share it on social media. Combining the footprint traces of different master virtual objects with the background images of different virtual combat areas provides the user with a personalized footprint trace drawing mode, which enhances human-computer interaction and also expands the social modes among users in the virtual combat area.
FIG. 10 illustrates a block diagram of a virtual trace display apparatus provided in one embodiment of the present application. The device has the functions of realizing the method examples, and the functions can be realized by hardware or by hardware executing corresponding software. The apparatus may include:
a display module 1010, configured to display a master virtual object and a game virtual object, where the master virtual object is a virtual object controlled by the currently logged-in target account, and the game virtual object is a virtual object that is located in the virtual combat area in a single round of the match and participates in the virtual battle;
a receiving module 1020, configured to receive a footprint drawing trigger operation, where the footprint drawing trigger operation is used to trigger the master virtual object to retain a footprint trace to be displayed in the virtual combat area;
the receiving module 1020 is further configured to receive a movement control operation on the master virtual object, where the movement control operation is used to control the master virtual object to move in the virtual combat area;
the display module 1010 is further configured to display a footprint drawing result generated after the master virtual object moves in the virtual combat area based on the footprint drawing trigger operation and the movement control operation.
In an optional embodiment, the receiving module 1020 is further configured to receive a function activation triggering operation, where the function activation triggering operation is configured to activate a social function of the master virtual object in the virtual combat area; display a footprint drawing trigger control in response to the function activation triggering operation, where the footprint drawing trigger control corresponds to retaining and displaying the footprint trace; and receive a trigger operation on the footprint drawing trigger control as the footprint drawing trigger operation.
In an optional embodiment, the display module 1010 is further configured to display a function activation wheel in response to the function activation triggering operation, where the function activation wheel is configured to display a control corresponding to the social function; and display the footprint drawing trigger control in the function activation wheel.
In an optional embodiment, the receiving module 1020 is further configured to receive, as the movement control operation, a selection operation on a target position in the virtual combat area, where the selection operation is used to instruct the master virtual object to move to the target position along a preset moving path; or receive a path drawing operation as the movement control operation, where the path drawing operation corresponds to a customized path and is used to control the master virtual object to move along the customized path.
In an alternative embodiment, as shown in fig. 11, the display module 1010 includes:
a display unit 1011, configured to display a moving process of the main control virtual object in the virtual combat area based on the footprint drawing trigger operation and the movement control operation, where a real-time footprint trace is generated in the virtual combat area in the moving process;
the display unit 1011 is further configured to display the footprint drawing result based on the footprint generated by the movement of the main control virtual object in response to the main control virtual object terminating the movement in the virtual combat area.
In an optional embodiment, the display unit 1011 is further configured to fuse the footprint generated by the movement of the main control virtual object with an area background corresponding to the virtual combat area, so as to obtain a result of drawing the footprint.
In an optional embodiment, the display unit 1011 is further configured to fuse, based on the area background containing a three-dimensional display element, the footprint trace generated by the movement of the master virtual object with an element mapping image corresponding to the three-dimensional display element to obtain the footprint trace drawing result; or fuse, based on the area background containing an area display element of the virtual combat area, the footprint trace generated by the movement of the master virtual object with the area display element to obtain the footprint trace drawing result.
In an optional embodiment, the display unit 1011 is further configured to determine the area display image corresponding to the area display element, where the area display image is a planar image corresponding to the area display element; and fusing the footprint generated by the movement of the main control virtual object with the area display image to obtain a footprint drawing result.
In an optional embodiment, the display module 1010 is further configured to display the disappearance process of the footprint drawing result in response to the end of the nth round of the match, where n is a positive integer; or display the disappearance process of the footprint drawing result in response to receiving a trace disappearance trigger operation, where the trace disappearance trigger operation is used to indicate cancellation of the display of the footprint drawing result.
In an optional embodiment, the display module 1010 is further configured to display a trace disappearance trigger control; and display the disappearance process of the footprint trace drawing result in response to receiving a trigger operation on the trace disappearance trigger control as the trace disappearance trigger operation.
In summary, the embodiment of the present application provides a virtual trace display apparatus. By triggering the footprint trace drawing function of the master virtual object in the virtual combat area and controlling the way the master virtual object moves in the virtual combat area, the apparatus displays the footprint trace left by the movement of the master virtual object in the current virtual combat area and finally generates a footprint trace drawing result, in which the footprint trace of the master virtual object is retained and displayed. This enriches the game playability of the user in the virtual match process; meanwhile, displaying the footprint trace drawing result in the virtual combat area expands the social modes available to the user during the match, facilitating interaction and communication with other players and enhancing game interactivity.
It should be noted that: the virtual trace display apparatus provided in the above embodiment is only illustrated by the division of the above functional modules, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the virtual trace display apparatus and the virtual trace display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 12 shows a block diagram of a terminal 1200 according to an exemplary embodiment of the present application. The terminal 1200 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 1200 includes a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1202 may include one or more computer-readable storage media, which may be non-transitory. The memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1202 is used to store at least one instruction to be executed by the processor 1201 to implement the virtual trace display method provided by the method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include a peripheral interface 1203 and at least one peripheral device. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1204, a display screen 1205, a camera assembly 1206, an audio circuit 1207, and a power supply 1208.
The peripheral interface 1203 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by means of electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over its surface. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display screen 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1205, providing the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 1200. The display screen 1205 may even be arranged in a non-rectangular irregular figure, that is, a shaped screen. The display screen 1205 can be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1206 is used to capture images or video. Optionally, the camera assembly 1206 includes a front camera and a rear camera. Generally, the front camera is disposed at the front panel of the terminal, and the rear camera is disposed at the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, the camera assembly 1206 may also include a flash. The flash may be a monochrome temperature flash or a dual color temperature flash. A dual color temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of the terminal 1200. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electric signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electric signal into a sound wave audible to a human being, or convert an electric signal into a sound wave inaudible to a human being for purposes such as distance measurement. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The power supply 1208 is used to provide power to the various components in the terminal 1200. The power supply 1208 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1208 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, optical sensor 1214, and proximity sensor 1215.
The acceleration sensor 1211 can detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect the components of the gravitational acceleration on the three coordinate axes. The processor 1201 may control the touch display screen 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
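The landscape/portrait decision described above can be sketched as follows. The axis convention (gravity projecting mostly onto the long-side y axis when the device is upright) is an illustrative assumption, not a detail of the embodiment.

```python
def choose_orientation(ax, ay):
    """Pick a UI orientation from the gravity components (m/s^2) on the
    x (short side) and y (long side) axes of the device.

    Assumed convention: holding the device upright projects gravity
    mostly onto y; holding it sideways projects gravity mostly onto x.
    """
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"

# Device upright: gravity (~9.8 m/s^2) falls mostly on the y axis.
print(choose_orientation(0.5, 9.7))   # portrait
# Device turned on its side: gravity falls mostly on the x axis.
print(choose_orientation(9.7, 0.5))   # landscape
```

A real implementation would also apply hysteresis so the UI does not flip when the device is held near 45 degrees.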
The gyro sensor 1212 may detect the body direction and rotation angle of the terminal 1200, and may cooperate with the acceleration sensor 1211 to collect the 3D motion of the user on the terminal 1200. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a tilting operation of the user), image stabilization during photographing, game control, and inertial navigation.
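One common way to make the gyro sensor "cooperate with" the acceleration sensor, as stated above, is a complementary filter that fuses the integrated gyro rate with the accelerometer-derived tilt angle. This is a minimal sketch under that assumption; the weight `alpha` is illustrative, not a value from the embodiment.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rotation rate (deg/s) with an accelerometer-derived
    tilt angle (deg) into a single estimate. alpha weights the integrated
    gyro path; (1 - alpha) pulls the estimate toward the accelerometer.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With no rotation reported by the gyro, the estimate slowly converges
# toward the accelerometer's 30-degree tilt reading.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, 0.0, 30.0, 0.01)
```

The gyro term tracks fast motion without accelerometer noise, while the small accelerometer term corrects the long-term drift that pure integration would accumulate.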
The pressure sensor 1213 may be disposed on a side frame of the terminal 1200 and/or an underlying layer of the touch display screen 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, a holding signal of the user on the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at the lower layer of the touch display screen 1205, the processor 1201 controls an operability control on the UI interface according to a pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
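The left-right hand recognition mentioned above could be sketched with a simple heuristic over the side-frame pressure readings. Both the heuristic (compare which frame is pressed harder) and the grip threshold are assumptions for illustration; the embodiment does not specify the recognition rule.

```python
def detect_holding_hand(left_pressure, right_pressure, min_grip=0.2):
    """Guess which hand holds the device from normalized (0..1) pressure
    readings on the left and right side frames. Assumed heuristic: the
    palm side of a one-handed grip presses its frame harder; below the
    min_grip floor the device is treated as not held.
    """
    if max(left_pressure, right_pressure) < min_grip:
        return "none"
    return "left" if left_pressure > right_pressure else "right"

print(detect_holding_hand(0.8, 0.3))  # left
print(detect_holding_hand(0.1, 0.1))  # none
```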
The optical sensor 1214 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display screen 1205 according to the ambient light intensity collected by the optical sensor 1214. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1205 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1205 is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1214.
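The brightness adjustment above amounts to a monotonic mapping from ambient light to a display level. A minimal sketch, in which the brightness range and the 1000-lux saturation point are illustrative assumptions rather than values from the embodiment:

```python
def display_brightness(ambient_lux, min_level=10, max_level=255, max_lux=1000):
    """Map ambient light intensity (lux) to a display brightness level:
    brighter in bright surroundings, dimmer in dark ones. The light
    reading is clamped so very bright scenes saturate at max_level.
    """
    lux = max(0, min(ambient_lux, max_lux))
    return round(min_level + (max_level - min_level) * lux / max_lux)

print(display_brightness(0))     # 10  (dark room: lowest brightness)
print(display_brightness(1000))  # 255 (direct light: full brightness)
```

In practice the mapping is usually nonlinear (human brightness perception is roughly logarithmic) and smoothed over time to avoid visible flicker.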
The proximity sensor 1215, also known as a distance sensor, is typically provided on the front panel of the terminal 1200. The proximity sensor 1215 is used to collect the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1215 detects that the distance between the user and the front of the terminal 1200 gradually decreases, the processor 1201 controls the touch display screen 1205 to switch from the bright screen state to the dark screen state; when the proximity sensor 1215 detects that the distance between the user and the front of the terminal 1200 gradually increases, the processor 1201 controls the touch display screen 1205 to switch from the dark screen state to the bright screen state.
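The screen-state switching above can be sketched as a small state machine driven by successive distance readings. The 5 cm threshold and the state names are assumptions for illustration; the embodiment only specifies the direction of change.

```python
def update_screen_state(state, prev_distance, distance, threshold=5.0):
    """Switch the screen between 'bright' and 'dark' from two consecutive
    distance readings (cm) between the user and the front of the device.
    Approaching within the threshold darkens the screen (e.g. during a
    call, to avoid accidental touches); moving away brightens it again.
    """
    if distance < prev_distance and distance < threshold:
        return "dark"
    if distance > prev_distance and distance >= threshold:
        return "bright"
    return state  # no decisive change: keep the current state

state = "bright"
state = update_screen_state(state, 20.0, 3.0)   # user approaching
print(state)                                     # dark
state = update_screen_state(state, 3.0, 15.0)   # user moving away
print(state)                                     # bright
```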
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of the terminal 1200, which may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, which may be stored in a computer-readable storage medium. The computer-readable storage medium may be the computer-readable storage medium contained in the memory of the above embodiments, or a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the virtual trace display method of any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included in the protection scope of the present application.