Detailed Description
In order to make the objectives, technical solutions, and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In order to enrich the sound when playing a behavior event and improve the audio-visual experience of the user, an embodiment of the present invention provides a method for playing a behavior event, where the behavior event played by the method corresponds to at least two sound events. Referring to fig. 1, the flow of the method provided by this embodiment includes:
101: and determining at least two sound events corresponding to the behavior event to be played currently.
As an optional embodiment, before determining at least two sound events corresponding to the behavior event to be currently played, the method further includes:
determining at least two sound events corresponding to each behavior event in preset sound events, and storing the corresponding relation between each behavior event and the sound event;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
102: acquiring the state of the behavior event, where the playing time information in the sound track of each sound event corresponding to the behavior event differs across states, the playing time information at least including a playing point and a playing duration.
103: and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
As an alternative embodiment, determining the playing time information of each sound event according to the state of the behavior event includes:
and determining a playing point and a playing time length corresponding to each sound event in the time for playing the behavior event according to the state of the behavior event, and determining the playing point and the playing time length corresponding to each sound event in the time for playing the behavior event as the playing time information of each sound event.
As an alternative embodiment, the status of the behavior event is the current playing frequency of the behavior event;
determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
As an alternative embodiment, before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further includes:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining playing time information of each sound event according to the current playing frequency of the behavior event, wherein the playing time information comprises:
and searching, in the stored correspondence between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
According to the method provided by the embodiment of the invention, at least two sound events corresponding to the current behavior event to be played are determined, the state of the behavior event is obtained, the playing time information of each sound event is further determined according to the state of the behavior event, and each sound event is played according to the playing time information of each sound event when the current behavior event is played. Because the number of the sound events corresponding to the behavior event to be played at present is at least two, the sound effect when the behavior event is played is enriched.
With reference to the content of the foregoing embodiment, an embodiment of the present invention provides a method for playing a behavior event, where the behavior event may be a behavior event configured in a network application, such as a behavior event configured for a virtual character in the network application, and a specific behavior event is not limited in all embodiments of the present invention. In order to enrich the sound effect when playing the behavior event, the behavior event played by the method provided by the embodiment corresponds to at least two sound events. Referring to fig. 2, the method flow provided by this embodiment includes:
201: and determining at least two sound events corresponding to the behavior event to be played currently.
In order to enrich the sound effect of the behavior event, the method provided by the embodiment may preset at least two sound events when the network application is set, determine at least two sound events corresponding to each behavior event in the preset sound events, and store the corresponding relationship between each behavior event and the sound event. The number of preset sound events may be 10, 20, 50, and the like, and the number of preset sound events is not specifically limited in this embodiment.
The present embodiment does not specifically limit the manner of determining the at least two sound events corresponding to each behavior event among the preset sound events. In specific implementation, the at least two sound events corresponding to each behavior event may be determined among the preset sound events according to the behavior content of each behavior event. For example, if the behavior event is a character charging on horseback while wielding a halberd, 4 sound events may be set for the behavior event according to its content: a horse-neighing sound event, a halberd-swinging sound event, a character-shouting sound event, and a hoofbeat sound event. For another example, if the behavior event is a character riding a horse while wielding a blade, 3 sound events may be set for the behavior event according to its content: a horse-neighing sound event, a blade-chopping sound event, and a character-shouting sound event.
The manner of storing the correspondence between each behavior event and the sound events includes, but is not limited to, storing the correspondence in a network server. The form of storing the correspondence includes, but is not limited to, storing it in the form of a table.
The correspondence between each behavior event and each sound event stored in the form of a table is described in detail in table 1.
TABLE 1
Further, after the preset correspondence between each behavior event and the sound events is stored, the at least two sound events corresponding to the behavior event to be played currently can be determined according to the stored correspondence. Taking table 1 as an example, if the behavior event to be played currently is behavior event 1, the sound events corresponding to behavior event 1 may be determined according to the stored correspondence as: sound event A, sound event B, and sound event C.
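The stored-correspondence lookup described above can be sketched as follows. This is a minimal illustration only: the mapping contents, the names `BEHAVIOR_SOUND_MAP` and `lookup_sound_events`, and the use of a plain dictionary are assumptions, not part of the disclosure (the embodiment itself stores the correspondence in a network server, e.g. as a table).

```python
# Hypothetical in-memory form of the Table-1-style correspondence:
# each behavior event maps to at least two sound events.
BEHAVIOR_SOUND_MAP = {
    "behavior_event_1": ["sound_event_A", "sound_event_B", "sound_event_C"],
    "behavior_event_2": ["sound_event_C", "sound_event_D"],
}

def lookup_sound_events(behavior_event: str) -> list[str]:
    """Return the at-least-two sound events stored for a behavior event."""
    events = BEHAVIOR_SOUND_MAP[behavior_event]
    # The method requires each behavior event to correspond to >= 2 sound events.
    assert len(events) >= 2
    return events
```

For the Table 1 example, `lookup_sound_events("behavior_event_1")` would return sound events A, B, and C.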
202: the state of the behavioral event is obtained.
The state of the behavior event includes, but is not limited to, the current playing frequency of the behavior event; the state of the behavior event is not specifically limited in this embodiment. Because behavior events in different states are played in different forms, and different playing forms bring different experience effects to the user, the method provided by this embodiment acquires the state of the behavior event in order to improve the user's experience.
Further, because the sound effects corresponding to a behavior event in different states should differ during playing, the playing time information in the audio track of each sound event corresponding to the behavior event differs across states. The playing time information includes, but is not limited to, a playing point, a playing duration, and the like; the playing time information is not specifically limited in this embodiment. Specifically, the playing point is the time point, within the time of playing the behavior event, at which a sound event corresponding to the behavior event starts to be played: when the playing time of the behavior event reaches the playing point of a sound event, that sound event is played. The playing point of a sound event may lie at different positions within the playing duration of the behavior event, such as 1/2 or 1/3 of that duration; this embodiment does not specifically limit the position of the playing point. The playing duration of a sound event is the length for which the sound event is played, and may be, for example, 1 minute, 2 minutes, or 3 minutes. Referring to behavior event 1 in table 1, the corresponding playing point and playing duration of each sound event are shown in fig. 3; as can be seen from fig. 3, the states of behavior event 1 are state a and state b.
When behavior event 1 is in state a, the playing point of sound event A in the track is 1/4 of the playing duration of the behavior event and its playing duration is 1 second; the playing point of sound event B in the track is 1/2 of the playing duration of the behavior event and its playing duration is 2 seconds; and the playing point of sound event C in the track is 3/4 of the playing duration of the behavior event and its playing duration is 2.5 seconds. When behavior event 1 is in state b, the playing point of sound event A in the track is 1/2 of the playing duration of the behavior event and its playing duration is 3 seconds; the playing point of sound event B is 1/3 of the playing duration and its playing duration is 1 second; and the playing point of sound event C is 3/5 of the playing duration and its playing duration is 0.5 seconds.
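The per-state timing just described can be encoded as data, for example as follows. The structure and all names (`PLAY_TIME_INFO`, `play_time_info`) are illustrative assumptions; the numeric values are the ones given above for behavior event 1 (playing points as fractions of the behavior event's playing duration, playing durations in seconds).

```python
# Hypothetical encoding of fig. 3 / the paragraph above:
# (behavior event, state) -> {sound event: (play_point_fraction, duration_s)}
PLAY_TIME_INFO = {
    ("behavior_event_1", "state_a"): {
        "sound_event_A": (1/4, 1.0),
        "sound_event_B": (1/2, 2.0),
        "sound_event_C": (3/4, 2.5),
    },
    ("behavior_event_1", "state_b"): {
        "sound_event_A": (1/2, 3.0),
        "sound_event_B": (1/3, 1.0),
        "sound_event_C": (3/5, 0.5),
    },
}

def play_time_info(behavior_event, state, sound_event):
    """Return (play_point_fraction, play_duration_seconds) for one sound event."""
    return PLAY_TIME_INFO[(behavior_event, state)][sound_event]
```

Keyed on state, the same behavior event yields different timing, which is exactly why step 202 acquires the state before step 203 determines the playing time information.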
203: and determining the playing time information of each sound event according to the state of the behavior event.
Each sound event corresponding to the behavior event corresponds to a playing point within the behavior event; when the time of playing the behavior event reaches the playing point of a sound event, that sound event is played. Because the state of the behavior event determines the playing time information of each corresponding sound event, and different states yield different playing time information, the playing time information of each sound event can be determined according to the state of the behavior event. Specifically, the manner of doing so includes, but is not limited to: determining, according to the state of the behavior event, the playing point and playing duration of each sound event within the time of playing the behavior event, and taking that playing point and playing duration as the playing time information of each sound event.
For the above-described processes, a detailed explanation will be given below with a specific example for the sake of understanding.
Suppose the behavior event is behavior event 1, its states are state a and state b, and its corresponding sound events are sound event A and sound event B, where the playing point of sound event A is 1/3 of the playing duration of the behavior event with a playing duration of 10 seconds, and the playing point of sound event B is 2/3 of the playing duration of the behavior event with a playing duration of 20 seconds. When the behavior event is in state a, the playing duration of the behavior event is 3 minutes; according to state a, the playing point of sound event A is determined to be the 1-minute position, so sound event A is played for 10 seconds when the playing time of the behavior event reaches 1 minute, and the playing point of sound event B is determined to be the 2-minute position, so sound event B is played for 20 seconds when the playing time reaches 2 minutes. When the behavior event is in state b, the playing duration of the behavior event is 1 minute; according to state b, the playing point of sound event A is determined to be the 20-second position, so sound event A is played for 10 seconds when the playing time reaches 20 seconds, and the playing point of sound event B is determined to be the 40-second position, so sound event B is played for 20 seconds when the playing time reaches 40 seconds.
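The conversion in this example, from play-point fractions to absolute start times, can be sketched as follows; the function name and dictionary layout are assumptions for illustration.

```python
def absolute_schedule(behavior_duration_s, fractions_and_durations):
    """Convert play-point fractions into absolute start times.

    fractions_and_durations: {sound_event: (play_point_fraction, duration_s)}
    Returns {sound_event: (start_time_s, duration_s)}.
    """
    return {
        name: (behavior_duration_s * frac, dur)
        for name, (frac, dur) in fractions_and_durations.items()
    }

# State a of the example: the behavior event plays for 3 minutes (180 s);
# sound event A at 1/3 of it starts at 60 s, sound event B at 2/3 at 120 s.
schedule = absolute_schedule(180, {"A": (1/3, 10), "B": (2/3, 20)})
```

Re-running the same function with a 60-second behavior duration reproduces the state b case (A at 20 s, B at 40 s).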
Optionally, since the state of the behavior event includes, but is not limited to, the current playing frequency of the behavior event, when the state of the behavior event is the current playing frequency, the manner of determining the playing time information of each sound event according to the state of the behavior event includes, but is not limited to:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
Further, in order to determine the playing time information of each sound event according to the current playing frequency of the behavior event, the method provided by this embodiment needs to preset and store the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event before determining the playing time information of each sound event according to the current playing frequency of the behavior event.
The embodiment is not particularly limited as to the manner in which the correspondence relationship between the play frequency of the behavior event and the play time information of the sound event is set in advance. In specific implementation, the determination can be performed according to the proportional relationship between the playing frequency of the behavior event and the playing time information of the sound event. Since the playing time information includes not only the playing point but also the playing duration, the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event is preset for the playing point and the playing duration in the playing time information, which will be described below.
Taking behavior event 2 and its corresponding sound event C as an example, when the correspondence between the playing frequency of the behavior event and the playing point of the sound event is preset, if it is known that, at playing frequency a, the playing point of sound event C is 1/m of the playing duration of the behavior event, then it can be determined that, at playing frequency b, the playing point of sound event C is b/(am) of the playing duration of the behavior event.
Similarly, taking behavior event 2 and its corresponding sound event C as an example, when the correspondence between the playing frequency of the behavior event and the playing duration of the sound event is preset, if it is known that, at playing frequency a, the playing duration of sound event C is t1, then it can be determined that, at playing frequency b, the playing duration of sound event C is t1·b/a.
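Both proportional rules above scale the reference values by the frequency ratio b/a, so they can be sketched with one helper. This is only an illustration of the stated proportionality; the function name and parameters are assumptions.

```python
def scale_play_time(ref_freq, cur_freq, ref_point_frac, ref_duration_s):
    """Scale a sound event's play point and duration by the frequency ratio,
    per the rule above: at frequency b, point 1/m becomes b/(a*m) and
    duration t1 becomes t1*b/a. Sketch only; not the disclosed implementation.
    """
    ratio = cur_freq / ref_freq  # b/a
    return ref_point_frac * ratio, ref_duration_s * ratio
```

For example, doubling the playing frequency (a = 2, b = 4) moves a play point from 1/4 to 1/2 of the behavior event's duration and stretches a 1.5-second playing duration to 3 seconds.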
Regarding the way of storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event, the method includes, but is not limited to, storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event in the server. Regarding the form of storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event, the storage includes, but is not limited to, storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event in the form of a table.
Taking behavior event a as an example, the correspondence between the playing frequency of the behavior event and the playing time information of the sound events, stored in the form of a table, may be seen in table 2.
TABLE 2
Further, after storing the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event, the method provided by this embodiment may determine the playing time information of each sound event according to the corresponding relationship between the stored playing frequency of the behavior event and the playing time information of the sound event. Specifically, determining the playing time information of each sound event according to the current playing frequency of the behavior event includes:
and searching, in the stored correspondence between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event. Taking table 2 as an example, if the current playing frequency of behavior event a is a, it is found from the stored correspondence that the playing point of sound event A corresponding to this playing frequency is q and its playing duration is t1; the playing point of sound event B is w and its playing duration is t2; and the playing point of sound event C is e and its playing duration is t3.
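The Table 2 lookup can be sketched as a nested mapping from playing frequency to per-sound-event timing. The structure and all names are placeholders standing in for the symbolic table entries (q/w/e, t1/t2/t3); none of this is the disclosed storage format.

```python
# Hypothetical Table-2-style store for behavior event a:
# playing frequency -> {sound event: (play_point, play_duration)}.
# Symbolic values from the text are kept as strings for illustration.
FREQ_TABLE = {
    "a": {
        "sound_event_A": ("q", "t1"),
        "sound_event_B": ("w", "t2"),
        "sound_event_C": ("e", "t3"),
    },
}

def lookup_by_frequency(freq):
    """Return the playing time information of every sound event at `freq`."""
    return FREQ_TABLE[freq]
```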
204: and when the current behavior event is played, playing each sound event according to the playing time information of each sound event.
Since the playing time information of each sound event has already been determined according to the state of the behavior event in step 203, and the playing time information includes the playing point and playing duration of each sound event, this step plays each sound event according to its playing time information when playing the current behavior event, on the basis of step 203.
When the current behavior event is played, the process of playing each sound event according to its playing time information includes, but is not limited to: when the time of playing the current behavior event reaches the playing point in the playing time information of a sound event, playing that sound event for its playing duration.
For example, suppose the current behavior event is behavior event 1 with corresponding sound events A and B, the playing duration of the current behavior event is 5 minutes, the playing point of sound event A is 1/5 of the playing duration with a playing duration of 20 seconds, and the playing point of sound event B is 3/5 of the playing duration with a playing duration of 30 seconds. Then sound event A is played for 20 seconds when the time of playing the behavior event reaches 1 minute, and sound event B is played for 30 seconds when the time reaches 3 minutes.
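Step 204 can be sketched as a simple scheduler that fires each sound event once the behavior event's elapsed playing time reaches that event's playing point. Everything here is an assumption for illustration: the simulated clock, the tick size, and the hypothetical `play_sound(name, duration)` callback (a real game engine would hook this into its frame timer and audio API).

```python
def play_behavior_event(schedule, play_sound, total_duration_s, tick_s=0.1):
    """Trigger each sound event at its playing point while the behavior
    event plays. `schedule` maps sound_event -> (start_time_s, duration_s);
    `play_sound(name, duration_s)` is a hypothetical playback callback.
    """
    # Process sound events in order of their playing points.
    pending = sorted(schedule.items(), key=lambda kv: kv[1][0])
    t = 0.0
    while t <= total_duration_s:
        # Fire every sound event whose playing point has been reached.
        while pending and pending[0][1][0] <= t:
            name, (_, dur) = pending.pop(0)
            play_sound(name, dur)
        t += tick_s  # simulated clock; stands in for the engine's timer
```

For instance, with `schedule = {"A": (0.0, 1), "B": (0.5, 2)}`, sound event A is triggered at the start and sound event B half a second in, each with its own playing duration.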
In the method provided by the embodiment of the invention, at least two sound events corresponding to the behavior event to be played currently are determined, the state of the behavior event is obtained, the playing time information of each sound event corresponding to the behavior event is further determined according to the state of the behavior event, and each sound event is played when the time for playing the current behavior event reaches the playing point of each sound event. Because the number of the sound events corresponding to the behavior event to be played currently is at least two, the sound when the behavior event is played is enriched.
Referring to fig. 4, an embodiment of the present invention provides a device for playing a behavior event, where the behavior event played by the device corresponds to at least two sound events, and the device includes:
a first determining module 401, configured to determine at least two sound events corresponding to a behavior event to be currently played;
an obtaining module 402, configured to obtain the state of the behavior event, where the playing time information in the sound track of each sound event corresponding to the behavior event differs across states, the playing time information at least including a playing point and a playing duration;
a second determining module 403, configured to determine the playing time information of each sound event according to the state of the behavior event;
a playing module 404, configured to play each sound event according to the playing time information of each sound event when the current behavior event is played.
Referring to fig. 5, the apparatus further comprises:
a third determining module 405, configured to determine at least two sound events corresponding to each behavior event among preset sound events;
a first storage module 406, configured to store the correspondence between each behavior event and the sound events;
the first determining module 401 is configured to determine, according to the stored correspondence between each behavior event and the sound events, at least two sound events corresponding to the behavior event to be played currently.
As an alternative embodiment, the second determining module 403 is configured to determine, according to the state of the behavior event, the playing point and playing duration of each sound event within the time of playing the behavior event, and to take that playing point and playing duration as the playing time information of each sound event.
As an alternative embodiment, the status of the behavior event is the current playing frequency of the behavior event;
a second determining module 403, configured to determine the playing time information of each sound event according to the current playing frequency of the behavior event.
Referring to fig. 6, the apparatus further comprises:
a second storage module 407, configured to pre-store the correspondence between the playing frequency of the behavior event and the playing time information of the sound event;
the second determining module 403 is configured to search, in the stored correspondence between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
In summary, in the apparatus provided in the embodiment of the present invention, at least two sound events corresponding to a current behavior event to be played are determined, and a state of the behavior event is obtained, so that play time information of each sound event corresponding to the behavior event is determined according to the state of the behavior event, and each sound event is played when the time for playing the current behavior event reaches a play point of each sound event. Because the number of the sound events corresponding to the behavior event to be played currently is at least two, the sound when the behavior event is played is enriched.
Referring to fig. 7, a schematic structural diagram of a terminal according to an embodiment of the present invention is shown; the terminal may be used to implement the method for playing a behavior event provided in the foregoing embodiments. Specifically:
the terminal 700 may include components such as an RF (Radio Frequency)circuit 110, amemory 120 including one or more computer-readable storage media, aninput unit 130, adisplay unit 140, asensor 150, anaudio circuit 160, a WiFi (Wireless Fidelity)module 170, aprocessor 180 including one or more processing cores, and apower supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and sends it to the one or more processors 180 for processing, and transmits uplink data to the base station. In general, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), etc.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 700. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface 131 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal 700, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when a touch operation is detected on or near the touch-sensitive surface 131, it is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 7 the touch-sensitive surface 131 and the display panel 141 are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 131 may be integrated with the display panel 141 to implement input and output functions.
The terminal 700 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 141 and/or the backlight when the terminal 700 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration), vibration-recognition-related functions (such as a pedometer and tapping), and the like; as for other sensors that can be configured in the terminal 700, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, detailed descriptions thereof are omitted.
The audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between a user and the terminal 700. The audio circuit 160 may transmit an electrical signal, converted from received audio data, to the speaker 161, which converts the electrical signal into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the processor 180 for processing and sent via the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earbud jack to provide communication between a peripheral headset and the terminal 700.
WiFi belongs to a short-distance wireless transmission technology, and the terminal 700 can help a user send and receive e-mails, browse web pages, access streaming media, and the like through the WiFi module 170, which provides wireless broadband Internet access for the user. Although fig. 7 shows the WiFi module 170, it is understood that it is not an essential constituent of the terminal 700 and may be omitted entirely as needed within a scope that does not change the essence of the invention.
The processor 180 is the control center of the terminal 700, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal 700 and processes data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the mobile phone. Optionally, the processor 180 may include one or more processing cores; optionally, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may alternatively not be integrated into the processor 180.
The terminal 700 also includes a power supply 190 (e.g., a battery) for powering the various components. Preferably, the power supply 190 may be logically coupled to the processor 180 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 190 may also include any component such as one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal 700 may further include a camera, a Bluetooth module, and the like, which will not be described herein. In this embodiment, the display unit of the terminal 700 is a touch screen display, and the terminal 700 further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the following operations:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring a state of the behavior event, wherein the playing time information, in the sound track, of each sound event corresponding to the behavior event differs for different states of the behavior event, and the playing time information includes at least a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
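The three operations above can be illustrated with a minimal Python sketch. All names here (`SoundEvent`, `PlayTimeInfo`, `play_behavior_event`, and the callbacks) are hypothetical and chosen for illustration only; they are not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class PlayTimeInfo:
    play_point: float   # offset in the sound track, in seconds
    duration: float     # playing duration, in seconds

@dataclass
class SoundEvent:
    name: str
    timing_by_state: dict  # behavior-event state -> PlayTimeInfo

def play_behavior_event(behavior_event, correspondence, get_state, play):
    # Operation 1: at least two sound events correspond to the behavior event
    sound_events = correspondence[behavior_event]
    assert len(sound_events) >= 2
    # Operation 2: acquire the current state of the behavior event
    state = get_state(behavior_event)
    # Operation 3: determine each sound event's playing time information
    # from the state, then play each sound event accordingly
    for event in sound_events:
        info = event.timing_by_state[state]
        play(event.name, info.play_point, info.duration)
```

Here `correspondence` stands in for the stored relation between behavior events and sound events, and `play` stands in for the terminal's audio output path (e.g., the audio circuit 160 driving the speaker 161).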
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: before determining at least two sound events corresponding to the behavior event to be played currently, the method further includes:
determining, from among preset sound events, at least two sound events corresponding to each behavior event, and storing the corresponding relation between each behavior event and its sound events;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
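This pre-stored corresponding relation can be sketched as a simple lookup table. The event names below are illustrative examples only, not values from the disclosure:

```python
# Hypothetical pre-stored corresponding relation between behavior events
# and their sound events; each behavior event maps to at least two.
CORRESPONDENCE = {
    "jump":  ["takeoff_swoosh", "landing_thud"],
    "shoot": ["trigger_click", "gunshot", "shell_drop"],
}

def sound_events_for(behavior_event):
    # Determine the sound events by looking up the stored relation,
    # rather than recomputing the correspondence at play time.
    return CORRESPONDENCE[behavior_event]
```

Storing the relation in advance means that, when a behavior event is about to be played, its sound events are resolved with a single lookup.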
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: determining the playing time information of each sound event according to the state of the behavior event includes:
and determining, according to the state of the behavior event, a playing point and a playing duration corresponding to each sound event within the time for playing the behavior event, and taking the playing point and the playing duration corresponding to each sound event as the playing time information of that sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: the state of the behavior event is the current playing frequency of the behavior event;
determining the playing time information of each sound event according to the state of the behavior event includes:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further comprises the following steps:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining the playing time information of each sound event according to the current playing frequency of the behavior event includes:
and searching, in the stored corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
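The frequency-based lookup of the fifth implementation manner can be sketched as a nested table. The frequencies, event names, and timing values below are hypothetical placeholders:

```python
# Hypothetical pre-stored corresponding relation between a behavior
# event's playing frequency and each sound event's playing time
# information, expressed as (play_point_s, duration_s) tuples.
TIMING_BY_FREQUENCY = {
    1.0: {"footstep": (0.0, 0.5),  "breath": (0.25, 0.5)},
    2.0: {"footstep": (0.0, 0.25), "breath": (0.125, 0.25)},
}

def timing_for(frequency):
    # Search the stored relation for the playing time information of
    # each sound event at the behavior event's current playing frequency.
    return TIMING_BY_FREQUENCY[frequency]
```

In this sketch, a higher playing frequency maps to shorter playing durations and earlier playing points, so the sound events stay aligned with the faster behavior event.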
The terminal provided by the embodiment of the present invention determines at least two sound events corresponding to the behavior event to be played currently, acquires the state of the behavior event, further determines the playing time information of each sound event corresponding to the behavior event according to the state of the behavior event, and plays each sound event according to the playing time information of each sound event when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played currently, the sound effect when the behavior event is played is enriched.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform a method of playing a behavioral event, the method comprising:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring a state of the behavior event, wherein the playing time information, in the sound track, of each sound event corresponding to the behavior event differs for different states of the behavior event, and the playing time information includes at least a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the one or more programs stored in the computer-readable storage medium further include instructions for performing the following operations: before determining at least two sound events corresponding to the behavior event to be played currently, the method further includes:
determining, from among preset sound events, at least two sound events corresponding to each behavior event, and storing the corresponding relation between each behavior event and its sound events;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the one or more programs stored in the computer-readable storage medium further include instructions for performing the following operations: determining the playing time information of each sound event according to the state of the behavior event includes:
and determining, according to the state of the behavior event, a playing point and a playing duration corresponding to each sound event within the time for playing the behavior event, and taking the playing point and the playing duration corresponding to each sound event as the playing time information of that sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the one or more programs stored in the computer-readable storage medium further include instructions for performing the following operations: the state of the behavior event is the current playing frequency of the behavior event;
determining the playing time information of each sound event according to the state of the behavior event includes:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the one or more programs stored in the computer-readable storage medium further include instructions for performing the following operations: before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further includes:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining the playing time information of each sound event according to the current playing frequency of the behavior event includes:
and searching, in the stored corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
The computer-readable storage medium provided in the embodiment of the present invention determines at least two sound events corresponding to the behavior event to be played currently, acquires the state of the behavior event, further determines the playing time information of each sound event corresponding to the behavior event according to the state of the behavior event, and plays each sound event according to the playing time information of each sound event when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played currently, the sound effect when the behavior event is played is enriched.
The embodiment of the present invention provides a graphical user interface, which is used on a terminal for playing a behavior event, where the terminal comprises a touch screen display, a memory, and one or more processors for executing one or more programs; the graphical user interface includes:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring a state of the behavior event, wherein the playing time information, in the sound track, of each sound event corresponding to the behavior event differs for different states of the behavior event, and the playing time information includes at least a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
The graphical user interface provided by the embodiment of the present invention determines at least two sound events corresponding to the behavior event to be played currently, acquires the state of the behavior event, further determines the playing time information of each sound event corresponding to the behavior event according to the state of the behavior event, and plays each sound event according to the playing time information of each sound event when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played currently, the sound effect when the behavior event is played is enriched.
It should be noted that, in the apparatus for playing a behavior event provided in the foregoing embodiment, the division of the functional modules is merely exemplified when a behavior event is played; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus for playing a behavior event may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment for playing a behavior event and the method embodiment for playing a behavior event provided in the foregoing embodiments belong to the same concept; the specific implementation processes thereof are described in detail in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.