CN105159655B - Behavior event playing method and device - Google Patents

Behavior event playing method and device

Info

Publication number
CN105159655B
CN105159655B (application CN201410229383.XA)
Authority
CN
China
Prior art keywords
event
playing
sound
behavior
behavior event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410229383.XA
Other languages
Chinese (zh)
Other versions
CN105159655A (en)
Inventor
陈小荣
韦龙凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201410229383.XA (CN105159655B)
Priority to SG11201605960WA
Priority to MYPI2016703177A (MY196865A)
Priority to PCT/CN2015/080100 (WO2015184959A2)
Publication of CN105159655A
Application granted
Publication of CN105159655B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

The invention discloses a method and a device for playing a behavior event, and belongs to the technical field of computers. The method comprises the following steps: determining at least two sound events corresponding to the behavior event to be played currently; acquiring states of the behavior events, wherein the playing time information of each sound event corresponding to the behavior events in different states in the sound track is different, and the playing time information at least comprises a playing point and a playing duration; and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played. According to the method and the device, after at least two sound events corresponding to the behavior event to be played currently are determined, the playing time information of each sound event is determined according to the state of the behavior event, and each sound event is played according to the playing time information of each sound event when the current behavior event is played, so that the sound effect when the behavior event is played is enriched.

Description

Behavior event playing method and device
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for playing a behavior event.
Background
With the development of computer technology, the number and variety of online game applications keep increasing. If an online game application delivers a poor experience, its user quantity decreases. Since user quantity is an important index for measuring the performance of an online game application, and the quality with which an online game application plays behavior events affects its user quantity, how to play behavior events has become a problem that technicians in the field pay attention to in order to increase the user quantity of online game applications.
When playing behavior events, the related art first sets one sound event for each behavior event of each object in the online game application, where each sound event comprises one sound effect; then, when the behavior event of a certain object in the online game application is played, the corresponding sound event is played.
In the process of implementing the invention, the inventor finds that the related art has at least the following problems:
in the related art, when a behavior event is played, since each behavior event of each object corresponds to only one sound event, the sound effect when the behavior event is played is relatively monotonous.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for playing a behavior event. The technical scheme is as follows:
in one aspect, a method for playing behavior events is provided, where the behavior events played by the method correspond to at least two sound events, and the method includes:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring states of the behavior events, wherein the playing time information of each sound event corresponding to the behavior events in different states in a sound track is different, and the playing time information at least comprises a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
In another aspect, an apparatus for playing behavior events, where the behavior events played by the apparatus correspond to at least two sound events, the apparatus includes:
the first determining module is used for determining at least two sound events corresponding to the behavior event to be played currently;
the acquisition module is used for acquiring the states of the behavior events, and the playing time information of each sound event corresponding to the behavior events in different states in the sound track is different, wherein the playing time information at least comprises a playing point and a playing duration;
the second determining module is used for determining the playing time information of each sound event according to the state of the behavior event;
and the playing module is used for playing each sound event according to the playing time information of each sound event when the current behavior event is played.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the method comprises the steps of determining at least two sound events corresponding to a current behavior event to be played, obtaining the states of the behavior events, further determining the playing time information of each sound event according to the states of the behavior events, and playing each sound event according to the playing time information of each sound event when the current behavior event is played. Because the number of the sound events corresponding to the behavior event to be played at present is at least two, the sound effect when the behavior event is played is enriched.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of a method for playing behavior events according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for playing behavior events according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of a behavior event and its corresponding sound events and playing durations according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a playing apparatus for behavior events according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of a playing apparatus for behavior events according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a playing apparatus for behavior events according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In order to enrich sound when playing a behavior event and improve audio-visual experience of a user, an embodiment of the present invention provides a method for playing a behavior event, where the behavior event played by the method corresponds to at least two sound events, see fig. 1, and a flow of the method provided by this embodiment includes:
101: and determining at least two sound events corresponding to the behavior event to be played currently.
As an optional embodiment, before determining at least two sound events corresponding to the behavior event to be currently played, the method further includes:
determining at least two sound events corresponding to each behavior event in preset sound events, and storing the corresponding relation between each behavior event and the sound event;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
102: acquiring the states of the behavior events, wherein the playing time information of each sound event corresponding to the behavior events in different states in the sound track is different, and the playing time information at least comprises a playing point and a playing duration.
103: and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
As an alternative embodiment, determining the playing time information of each sound event according to the state of the behavior event includes:
and determining a playing point and a playing time length corresponding to each sound event in the time for playing the behavior event according to the state of the behavior event, and determining the playing point and the playing time length corresponding to each sound event in the time for playing the behavior event as the playing time information of each sound event.
As an alternative embodiment, the status of the behavior event is the current playing frequency of the behavior event;
determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
As an alternative embodiment, before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further includes:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining playing time information of each sound event according to the current playing frequency of the behavior event, wherein the playing time information comprises:
and searching the playing time information of each sound event corresponding to the current playing frequency of the behavior event in the corresponding relation between the playing frequency of the stored behavior event and the playing time of the sound event.
According to the method provided by the embodiment of the invention, at least two sound events corresponding to the current behavior event to be played are determined, the state of the behavior event is obtained, the playing time information of each sound event is further determined according to the state of the behavior event, and each sound event is played according to the playing time information of each sound event when the current behavior event is played. Because the number of the sound events corresponding to the behavior event to be played at present is at least two, the sound effect when the behavior event is played is enriched.
With reference to the content of the foregoing embodiment, an embodiment of the present invention provides a method for playing a behavior event, where the behavior event may be a behavior event configured in a network application, such as a behavior event configured for a virtual character in the network application, and a specific behavior event is not limited in all embodiments of the present invention. In order to enrich the sound effect when playing the behavior event, the behavior event played by the method provided by the embodiment corresponds to at least two sound events. Referring to fig. 2, the method flow provided by this embodiment includes:
201: and determining at least two sound events corresponding to the behavior event to be played currently.
In order to enrich the sound effect of the behavior event, the method provided by the embodiment may preset at least two sound events when the network application is set, determine at least two sound events corresponding to each behavior event in the preset sound events, and store the corresponding relationship between each behavior event and the sound event. The number of preset sound events may be 10, 20, 50, and the like, and the number of preset sound events is not specifically limited in this embodiment.
The present embodiment is not particularly limited with respect to the manner of determining at least two sound events corresponding to each behavior event among the preset sound events. In a specific implementation, at least two sound events corresponding to each behavior event can be determined among the preset sound events according to the behavioral content of each behavior event. For example, if the behavior event is a character charging on horseback while wielding a halberd, 4 sound events can be set for the behavior event according to its content: a horse-neighing sound event, a halberd-swinging sound event, a character-shouting sound event, and a hoofbeat sound event. For another example, if the behavior event is a character slashing with a blade on horseback, 3 sound events can be set for the behavior event according to its content: a horse-neighing sound event, a blade-chopping sound event, and a character-shouting sound event.
The manner of storing the correspondence between each behavior event and its sound events includes, but is not limited to, storing the correspondence in the network server. The form of storing the correspondence includes, but is not limited to, storing it in the form of a table.
The correspondence between each behavior event and each sound event stored in the form of a table is described in detail in table 1.
TABLE 1
(Table 1 appears as an image in the original publication and is not reproduced here.)
Further, after the preset correspondence between each behavior event and the sound event is stored, when at least two sound events corresponding to the behavior event to be played currently are determined, at least two sound events corresponding to the behavior event to be played currently can be determined according to the stored correspondence between each behavior event and the sound event. Taking table 1 as an example, if the behavior event to be played currently is behavior event 1, the sound event corresponding to behavior event 1 may be determined according to the stored correspondence between each behavior event and the sound event as follows: sound event a, sound event B, and sound event C.
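The lookup described above can be sketched as a simple in-memory mapping. This is a hypothetical stand-in for the stored correspondence of Table 1; the event names are illustrative, not identifiers from the patent.

```python
# Hypothetical in-memory version of the stored correspondence between
# behavior events and sound events (in the spirit of Table 1).
SOUND_EVENT_MAP = {
    "behavior_event_1": ["sound_event_A", "sound_event_B", "sound_event_C"],
    "behavior_event_2": ["sound_event_C", "sound_event_D"],
}

def sound_events_for(behavior_event):
    """Look up the at-least-two sound events mapped to a behavior event."""
    return SOUND_EVENT_MAP[behavior_event]
```

In a real implementation this table would live on the network server, as the text notes; the dictionary merely illustrates the shape of the lookup.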
202: the state of the behavioral event is obtained.
The state of the behavior event includes, but is not limited to, a current playing frequency of the behavior event, and the like, and the state of the behavior event is not specifically limited in this embodiment. Because the states of the behavior events are different, the forms of playing the behavior events are different, and the experience effects brought to the user by the different playing forms of the behavior events are also different, the method provided by the embodiment needs to acquire the states of the behavior events in order to improve the experience effects of the user.
Further, since the sound effects corresponding to a behavior event in different states should differ during playback, the playing time information of each sound event in the audio track corresponding to the behavior event differs between states. The playing time information includes, but is not limited to, a playing point and a playing duration; this embodiment does not specifically limit the playing time information. Specifically, the playing point is the time point, within the time of playing the behavior event, at which a corresponding sound event starts to play: when the playing time of the behavior event reaches the playing point of a corresponding sound event, that sound event is played. The playing point of a sound event may lie at different positions within the playing duration of the behavior event, such as 1/2 or 1/3 of that duration; this embodiment does not specifically limit its position. The playing duration of a sound event is the length for which it plays, such as 1 minute, 2 minutes, or 3 minutes. For behavior event 1 in Table 1, the corresponding playing points and playing durations of its sound events are shown in fig. 3, from which it can be seen that behavior event 1 has two states, state a and state b.
When behavior event 1 is in state a, the playing point of sound event A in the track is at 1/4 of the behavior event's playing duration and its playing duration is 1 second; the playing point of sound event B is at 1/2 of the behavior event's playing duration and its playing duration is 2 seconds; the playing point of sound event C is at 3/4 of the behavior event's playing duration and its playing duration is 2.5 seconds. When behavior event 1 is in state b, the playing point of sound event A is at 1/2 of the behavior event's playing duration and its playing duration is 3 seconds; the playing point of sound event B is at 1/3 and its playing duration is 1 second; and the playing point of sound event C is at 3/5 and its playing duration is 0.5 seconds.
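The per-state figures above can be sketched as a small table keyed by state. The data structure is an assumption for illustration; the fractions and durations are the ones given in the text for behavior event 1.

```python
# Per-state play-time table for behavior event 1: each entry is
# (play point as a fraction of the behavior event's playing duration,
# sound-event playing duration in seconds).
PLAY_TIME = {
    "a": {"A": (1/4, 1.0), "B": (1/2, 2.0), "C": (3/4, 2.5)},
    "b": {"A": (1/2, 3.0), "B": (1/3, 1.0), "C": (3/5, 0.5)},
}

def play_point_seconds(state, sound_event, behavior_duration_s):
    """Absolute start time (seconds) of a sound event within the behavior event."""
    fraction, _duration = PLAY_TIME[state][sound_event]
    return fraction * behavior_duration_s
```

For an 8-second behavior event in state a, sound event B would start at the 4-second mark, matching the 1/2 play point above.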
203: and determining the playing time information of each sound event according to the state of the behavior event.
Because each sound event corresponding to the behavior event corresponds to a playing point in the behavior event, when the playing time for playing the behavior event reaches the playing point for playing the sound event corresponding to each behavior event, the sound event corresponding to each behavior event is played. Since the state of the action event determines the playing time information of each sound event corresponding to the action event, and the states of the action event are different, the playing time information of each sound event corresponding to the action event is also different, and therefore, when the playing time information of each sound event corresponding to the action event is determined, the determination can be performed according to the state of the action event. Specifically, the method for determining the playing time information of each sound event corresponding to the behavior event according to the state of the behavior event includes, but is not limited to: and determining a playing point and a playing time length corresponding to each sound event in the time for playing the behavior event according to the state of the behavior event, and determining the playing point and the playing time length corresponding to each sound event in the time for playing the behavior event as the playing time information of each sound event.
For the above-described processes, a detailed explanation will be given below with a specific example for the sake of understanding.
Suppose the behavior event is behavior event 1 with states a and b, its corresponding sound events are sound event A and sound event B, sound event A plays for 10 seconds starting at 1/3 of the behavior event's playing duration, and sound event B plays for 20 seconds starting at 2/3 of it. When the behavior event is in state a and its playing duration is 3 minutes, the playing point of sound event A is determined, according to state a, to be at the 1-minute mark, so sound event A is played for 10 seconds when the playing time of the behavior event reaches 1 minute; the playing point of sound event B is at the 2-minute mark, so sound event B is played for 20 seconds when the playing time reaches 2 minutes. When the behavior event is in state b and its playing duration is 1 minute, the playing point of sound event A is at the 20-second mark, so sound event A is played for 10 seconds when the playing time reaches 20 seconds; the playing point of sound event B is at the 40-second mark, so sound event B is played for 20 seconds when the playing time reaches 40 seconds.
Optionally, since the state of the behavior event includes, but is not limited to, the current playing frequency of the behavior event, when the state of the behavior event is the current playing frequency, the manner of determining the playing time information of each sound event according to the state of the behavior event includes, but is not limited to:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
Further, in order to determine the playing time information of each sound event according to the current playing frequency of the behavior event, the method provided by this embodiment needs to preset and store the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event before determining the playing time information of each sound event according to the current playing frequency of the behavior event.
The embodiment is not particularly limited as to the manner in which the correspondence relationship between the play frequency of the behavior event and the play time information of the sound event is set in advance. In specific implementation, the determination can be performed according to the proportional relationship between the playing frequency of the behavior event and the playing time information of the sound event. Since the playing time information includes not only the playing point but also the playing duration, the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event is preset for the playing point and the playing duration in the playing time information, which will be described below.
Taking behavior event 2 and its corresponding sound event C as an example, when the correspondence between the playing frequency of the behavior event and the playing point of the sound event is preset: if it is known that, at playing frequency a, the playing point of sound event C is at 1/m of the behavior event's playing duration, then it can be determined that, at playing frequency b, the playing point of sound event C is at b/(am) of the playing duration.
Similarly, again taking behavior event 2 and its corresponding sound event C as an example, when the correspondence between the playing frequency of the behavior event and the playing duration of the sound event is preset: if it is known that, at playing frequency a, the playing duration of sound event C is t1, then it can be determined that, at playing frequency b, the playing duration of sound event C is t1·b/a.
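The two proportional relationships above can be written out directly. This is a minimal sketch under the assumption, stated in the text, that both the play point and the playing duration scale linearly with the behavior event's playing frequency.

```python
def scaled_play_point(freq_a, point_fraction_at_a, freq_b):
    # At frequency a the play point is at 1/m of the duration;
    # at frequency b it becomes b/(a*m).
    return point_fraction_at_a * freq_b / freq_a

def scaled_duration(freq_a, duration_at_a, freq_b):
    # A playing duration of t1 at frequency a becomes t1*b/a at frequency b.
    return duration_at_a * freq_b / freq_a
```

For instance, doubling the playing frequency from 2 to 4 would move a play point from 1/4 to 1/2 of the duration and stretch a 10-second sound event to 20 seconds under this linear model.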
Regarding the way of storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event, the method includes, but is not limited to, storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event in the server. Regarding the form of storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event, the storage includes, but is not limited to, storing the correspondence between the playing frequency of the behavior event and the playing time information of the sound event in the form of a table.
Taking behavior event a as an example, the correspondence between the playing frequency of the behavior event and the playing time information of the sound events, stored in the form of a table, may specifically be as shown in Table 2.
TABLE 2
(Table 2 appears as an image in the original publication and is not reproduced here.)
Further, after storing the corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event, the method provided by this embodiment may determine the playing time information of each sound event according to the corresponding relationship between the stored playing frequency of the behavior event and the playing time information of the sound event. Specifically, determining the playing time information of each sound event according to the current playing frequency of the behavior event includes:
and searching the playing time information of each sound event of the behavior event corresponding to the current playing frequency of the behavior event in the corresponding relation between the playing frequency of the stored behavior event and the playing time information of the sound event. Taking table 2 as an example, if the current frequency of the behavior event a is a, finding out that the playing point of the sound event a corresponding to the current playing frequency of the behavior event is q and the playing time duration is t1 from the corresponding relationship between the playing frequency of the stored behavior event and the playing time information of the sound event; the playing point of the sound event B is w, the playing time length is t2(ii) a The sound event C has a playing point of e and a playing time of t3
204: and when the current behavior event is played, playing each sound event according to the playing time information of each sound event.
Since the playing time information of each sound event has already been determined according to the state of the behavior event in step 203, and that information includes each sound event's playing point and playing duration, this step, on the basis of step 203, plays each sound event according to its playing time information while playing the current behavior event.
When the current behavior event is played, the process of playing each sound event according to the playing time information of each sound event includes, but is not limited to, playing each sound event according to the playing time of each sound event when the time of playing the current behavior event reaches the playing point in the playing time information of each sound event.
For example, if the current behavior event is behavior event 1, and the corresponding sound events are sound event a and sound event B, where the play time of the current behavior event is 5 minutes, the play point of sound event a is 1/5 of the play time of the play behavior event and the play time is 20 seconds, and the play point of sound event B is 3/5 of the play time of the play behavior event and the play time is 30 seconds, then sound event a is played for 20 seconds when the time of playing behavior event reaches 1 minute, and sound event B is played for 30 seconds when the time of playing behavior event reaches 2 minutes.
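The scheduling in the example above can be sketched as follows. A real game engine would hand the schedule to its audio mixer; this hypothetical helper only computes when each sound event starts and stops, using exact fractional play points to avoid rounding.

```python
def schedule_sounds(behavior_duration_s, play_time_info):
    """play_time_info maps sound-event name -> ((num, den), duration_s),
    where the play point is num/den of the behavior event's duration.
    Returns (start, end, name) triples sorted by start time."""
    schedule = []
    for name, ((num, den), dur) in play_time_info.items():
        start = behavior_duration_s * num / den
        schedule.append((start, start + dur, name))
    return sorted(schedule)

# The worked example above: a 5-minute (300 s) behavior event, sound
# event A at 1/5 of the duration for 20 s, sound event B at 3/5 for 30 s.
print(schedule_sounds(300, {"A": ((1, 5), 20), "B": ((3, 5), 30)}))
# -> [(60.0, 80.0, 'A'), (180.0, 210.0, 'B')]
```

This reproduces the text's result: sound event A plays from the 1-minute mark for 20 seconds, and sound event B from the 3-minute mark for 30 seconds.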
In the method provided by the embodiment of the invention, at least two sound events corresponding to the behavior event to be played currently are determined, the state of the behavior event is obtained, the playing time information of each sound event corresponding to the behavior event is further determined according to the state of the behavior event, and each sound event is played when the time for playing the current behavior event reaches the playing point of each sound event. Because the number of the sound events corresponding to the behavior event to be played currently is at least two, the sound when the behavior event is played is enriched.
Referring to fig. 4, an embodiment of the present invention provides a device for playing a behavior event, where the behavior event played by the device corresponds to at least two sound events, and the device includes:
a first determiningmodule 401, configured to determine at least two sound events corresponding to a behavior event to be currently played;
an obtainingmodule 402, configured to obtain states of behavior events, where playing time information of each sound event in a sound track corresponding to the behavior event in different states is different, where the playing time information at least includes a playing point and a playing duration;
a second determiningmodule 403, configured to determine playing time information of each sound event according to the state of the behavior event;
theplaying module 404 is configured to play each sound event according to the playing time information of each sound event when the current behavior event is played.
Referring to fig. 5, the apparatus further comprises:
a third determiningmodule 405, configured to determine at least two sound events corresponding to each behavior event among preset sound events;
afirst storage module 406, configured to store a corresponding relationship between each behavior event and a sound event;
the first determiningmodule 401 is configured to determine, according to the stored correspondence between each behavior event and a sound event, at least two sound events corresponding to the behavior event to be played currently.
As an alternative embodiment, the second determining module 403 is configured to determine, according to the state of the behavior event, the playing point and playing duration corresponding to each sound event within the time for playing the behavior event, and to determine that playing point and playing duration as the playing time information of each sound event.
As an alternative embodiment, the state of the behavior event is the current playing frequency of the behavior event;
a second determining module 403, configured to determine playing time information of each sound event according to the current playing frequency of the behavior event.
Referring to fig. 6, the apparatus further comprises:
a second storage module 407, configured to pre-store a corresponding relationship between the playing frequency of the behavior event and the playing time information of the sound event;
a second determining module 403, configured to search, in the stored correspondence between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
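One way to realize such a correspondence, suggested by the proportional relationship mentioned in claim 1, is to store reference timings and scale them by the current playing frequency. This is only a sketch under that assumption; the event names and numbers are illustrative, not from the patent.

```python
# Reference playing time information at a nominal playing frequency of 1.0:
# sound event -> (playing point in seconds, playing duration in seconds).
BASE_TIMING = {
    "footstep_left": (0.0, 0.25),
    "footstep_right": (0.5, 0.25),
}

def timing_for_frequency(frequency):
    """Derive each sound event's playing time information from the behavior
    event's current playing frequency: a faster behavior event compresses
    both the playing points and the playing durations proportionally."""
    return {sound: (point / frequency, duration / frequency)
            for sound, (point, duration) in BASE_TIMING.items()}
```

Doubling the playing frequency halves every playing point and duration, so the same pair of footstep sounds fits into half the time.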
In summary, the apparatus provided in this embodiment of the present invention determines at least two sound events corresponding to the behavior event to be played, obtains the state of the behavior event, determines the playing time information of each sound event corresponding to the behavior event according to that state, and plays each sound event when the time for playing the current behavior event reaches that sound event's playing point. Because at least two sound events correspond to the behavior event to be played, the sound effect produced when the behavior event is played is enriched.
Referring to fig. 7, a schematic structural diagram of a terminal according to an embodiment of the present invention is shown; the terminal may be used to implement the method for playing a behavior event provided in the foregoing embodiments. Specifically:
the terminal 700 may include components such as an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, and a power supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and sends it to the one or more processors 180 for processing, and it transmits uplink data to the base station. In general, the RF circuitry 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Message Service), etc.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 700. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface 131 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of a user, detects a signal brought by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented using resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to a user and various graphical user interfaces of the terminal 700, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, and optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when a touch operation is detected on or near the touch-sensitive surface 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 7 the touch-sensitive surface 131 and the display panel 141 are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 131 may be integrated with the display panel 141 to implement input and output functions.
The terminal 700 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 141 and/or a backlight when the terminal 700 is moved to the ear. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration), vibration-recognition functions (such as a pedometer and tapping), and the like; other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the terminal 700, and detailed descriptions thereof are omitted.
The audio circuit 160, a speaker 161, and a microphone 162 may provide an audio interface between a user and the terminal 700. The audio circuit 160 may transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; the audio data is then output to the processor 180 for processing and sent through the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earbud jack to allow a peripheral headset to communicate with the terminal 700.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 170, the terminal 700 can help a user send and receive e-mail, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 7 shows the WiFi module 170, it is not an essential component of the terminal 700 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the terminal 700; it connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the terminal 700 and processes data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; optionally, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 180.
The terminal 700 also includes a power supply 190 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 180 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 190 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal 700 may further include a camera, a bluetooth module, etc., which will not be described herein. In this embodiment, the display unit of the terminal 700 is a touch screen display, and the terminal 700 further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring the state of the behavior event, wherein the playing time information in the sound track of each sound event corresponding to the behavior event differs when the behavior event is in different states, and the playing time information at least comprises a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: before determining at least two sound events corresponding to the behavior event to be played at present, the method further includes:
determining at least two sound events corresponding to each behavior event in preset sound events, and storing the corresponding relation between each behavior event and the sound event;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining a playing point and a playing time length corresponding to each sound event in the time for playing the behavior event according to the state of the behavior event, and determining the playing point and the playing time length corresponding to each sound event in the time for playing the behavior event as the playing time information of each sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: the state of the behavior event is the current playing frequency of the behavior event;
determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further comprises the following steps:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining playing time information of each sound event according to the current playing frequency of the behavior event, wherein the playing time information comprises:
and searching, in the stored corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
The terminal provided by this embodiment of the invention determines at least two sound events corresponding to the behavior event to be played, obtains the state of the behavior event, determines the playing time information of each sound event corresponding to the behavior event according to that state, and plays each sound event according to its playing time information when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played, the sound effect produced when the behavior event is played is enriched.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform a method of playing a behavioral event, the method comprising:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring the state of the behavior event, wherein the playing time information in the sound track of each sound event corresponding to the behavior event differs when the behavior event is in different states, and the playing time information at least comprises a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: before determining at least two sound events corresponding to the behavior event to be played at present, the method further includes:
determining at least two sound events corresponding to each behavior event in preset sound events, and storing the corresponding relation between each behavior event and the sound event;
determining at least two sound events corresponding to the behavior event to be played currently, including:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations: determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining a playing point and a playing time length corresponding to each sound event in the time for playing the behavior event according to the state of the behavior event, and determining the playing point and the playing time length corresponding to each sound event in the time for playing the behavior event as the playing time information of each sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: the state of the behavior event is the current playing frequency of the behavior event;
determining playing time information of each sound event according to the state of the behavior event, wherein the playing time information comprises:
and determining the playing time information of each sound event according to the current playing frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the memory of the terminal further includes instructions for performing the following operations: before determining the playing time information of each sound event according to the current playing frequency of the behavior event, the method further comprises the following steps:
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
determining playing time information of each sound event according to the current playing frequency of the behavior event, wherein the playing time information comprises:
and searching, in the stored corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event, for the playing time information of each sound event corresponding to the current playing frequency of the behavior event.
The computer-readable storage medium provided in this embodiment of the present invention determines at least two sound events corresponding to the behavior event to be played, obtains the state of the behavior event, determines the playing time information of each sound event corresponding to the behavior event according to that state, and plays each sound event according to its playing time information when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played, the sound effect produced when the behavior event is played is enriched.
An embodiment of the invention provides a graphical user interface for use on a terminal that plays and displays behavior events, where the terminal includes a touch screen display, a memory, and one or more processors for executing one or more programs; the graphical user interface is used for:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring the state of the behavior event, wherein the playing time information in the sound track of each sound event corresponding to the behavior event differs when the behavior event is in different states, and the playing time information at least comprises a playing point and a playing duration;
and determining the playing time information of each sound event according to the state of the behavior event, and playing each sound event according to the playing time information of each sound event when the current behavior event is played.
The graphical user interface provided by this embodiment of the invention determines at least two sound events corresponding to the behavior event to be played, obtains the state of the behavior event, determines the playing time information of each sound event corresponding to the behavior event according to that state, and plays each sound event according to its playing time information when the current behavior event is played. Because at least two sound events correspond to the behavior event to be played, the sound effect produced when the behavior event is played is enriched.
It should be noted that: in the playing apparatus for behavioral events provided in the foregoing embodiment, only the division of the functional modules is exemplified when playing a behavioral event, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the playing apparatus for behavioral events is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the embodiment of the playing apparatus for a behavior event and the embodiment of the playing method for a behavior event provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A method for playing behavior events, wherein the behavior events played by the method correspond to at least two sound events, and the method comprises:
determining at least two sound events corresponding to the behavior event to be played currently;
acquiring states of the behavior events, wherein playing time information of each sound event corresponding to the behavior events in different states in an audio track is different, the playing time information comprises a playing point and a playing duration, and the states of the behavior events are current playing frequencies of the behavior events;
pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
in the stored corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event, determining the playing point and playing duration corresponding to each sound event within the time for playing the behavior event according to the current playing frequency of the behavior event, determining the playing point and playing duration corresponding to each sound event within the time for playing the behavior event as the playing time information of each sound event, and playing each sound event according to the playing duration of each sound event when the time for playing the current behavior event reaches the playing point in the playing time information of each sound event, wherein the corresponding relation is determined according to the proportional relation between the playing frequency of the behavior event and the playing time information of the sound event.
2. The method according to claim 1, wherein before determining at least two sound events corresponding to the behavior event to be currently played, the method further comprises:
determining at least two sound events corresponding to each behavior event in preset sound events, and storing the corresponding relation between each behavior event and the sound event;
the determining at least two sound events corresponding to the behavior event to be played currently includes:
and determining at least two sound events corresponding to the behavior event to be played currently according to the stored corresponding relation between each behavior event and the sound event.
3. An apparatus for playing behavior events, wherein the behavior events played by the apparatus correspond to at least two sound events, the apparatus comprising:
the first determining module is used for determining at least two sound events corresponding to the behavior event to be played currently;
an obtaining module, configured to obtain states of the behavior event, where playing time information of each sound event in a sound track corresponding to the behavior event in different states is different, where the playing time information includes a playing point and a playing duration, and the state of the behavior event is a current playing frequency of the behavior event;
the second storage module is used for pre-storing the corresponding relation between the playing frequency of the behavior event and the playing time information of the sound event;
a second determining module, configured to determine, in a stored correspondence between a playing frequency of a behavior event and a playing time of a sound event, a playing point and a playing time duration corresponding to each sound event within a time for playing the behavior event according to a current playing frequency of the behavior event, and determine, as playing time information of each sound event, the playing point and the playing time duration corresponding to each sound event within the time for playing the behavior event, where the correspondence is determined according to a proportional relationship between the playing frequency of the behavior event and the playing time information of the sound event;
and the playing module is used for playing each sound event according to the playing duration of each sound event when the time for playing the current behavior event reaches the playing point in the playing time information of each sound event.
4. The apparatus of claim 3, further comprising:
the third determining module is used for determining at least two sound events corresponding to each behavior event in preset sound events;
the first storage module is used for storing the corresponding relation between each behavior event and each sound event;
the first determining module is configured to determine at least two sound events corresponding to the behavior event to be played currently according to the stored correspondence between each behavior event and the sound event.
CN201410229383.XA | 2014-05-28 | 2014-05-28 | Behavior event playing method and device | Active | CN105159655B (en)

Priority Applications (4)

Application Number | Priority Date | Filing Date | Title
CN201410229383.XA | 2014-05-28 | 2014-05-28 | Behavior event playing method and device
SG11201605960WA | 2014-05-28 | 2015-05-28 | Method and apparatus for playing behavior event
MYPI2016703177A | 2014-05-28 | 2015-05-28 | Method and apparatus for playing behavior event
PCT/CN2015/080100 | 2014-05-28 | 2015-05-28 | Method and apparatus for playing behavior event

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201410229383.XA | 2014-05-28 | 2014-05-28 | Behavior event playing method and device

Publications (2)

Publication Number | Publication Date
CN105159655A (en) | 2015-12-16
CN105159655B (en) | 2020-04-24

Family

ID=54767515

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410229383.XA (Active, CN105159655B (en)) | 2014-05-28 | 2014-05-28 | Behavior event playing method and device

Country Status (4)

Country | Link
CN | CN105159655B (en)
MY | MY196865A (en)
SG | SG11201605960WA (en)
WO | WO2015184959A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN109173259B (en)* | 2018-07-17 | 2022-01-21 | 派视觉虚拟现实(深圳)软件技术有限公司 | Sound effect optimization method, device and equipment in game
CN109246580B (en)* | 2018-09-25 | 2022-02-11 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | 3D sound effect processing method and related products
CN111135572A (en)* | 2019-12-24 | 2020-05-12 | Beijing Pixel Software Technology Co., Ltd. | Game sound effect management method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
JP2007014444A (en)* | 2005-07-06 | 2007-01-25 | Konami Digital Entertainment Co., Ltd. | Game apparatus, and control method and program therefor
CN100511240C (en)* | 2007-12-28 | 2009-07-08 | Tencent Technology (Shenzhen) Co., Ltd. | Audio document calling method and system
AU2012201105A1 (en)* | 2007-12-21 | 2012-03-15 | Aristocrat Technologies Australia Pty Limited | A gaming system, a sound controller, and a method of gaming
CN102542129A (en)* | 2010-12-08 | 2012-07-04 | Hangzhou Gecheng Network Technology Co., Ltd. | Three-dimensional (3D) scene display system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
AU2008201210B2 (en) * | 2007-04-02 | 2010-05-27 | Aristocrat Technologies Australia Pty Ltd | Gaming machine with sound effects
US20140091897A1 (en) * | 2012-04-10 | 2014-04-03 | Net Power And Light, Inc. | Method and system for measuring emotional engagement in a computer-facilitated event

Also Published As

Publication number | Publication date
CN105159655A (en) | 2015-12-16
MY196865A (en) | 2023-05-05
WO2015184959A2 (en) | 2015-12-10
SG11201605960WA (en) | 2016-08-30
WO2015184959A3 (en) | 2016-01-28

Similar Documents

Publication | Publication Date | Title
KR101978590B1 (en) | Message updating method, device and terminal
CN103488939B (en) | Method, device and terminal for prompting user
CN104967896A (en) | Method for displaying bulletscreen comment information, and apparatus thereof
CN106254910B (en) | Method and device for recording image
CN103596017B (en) | Video downloading method and system
JP2021005898A (en) | Method for acquiring interactive information, terminal, server and system
CN106371964B (en) | Method and device for prompting message
KR101813437B1 (en) | Method and system for collecting statistics on streaming media data, and related apparatus
CN106231433B (en) | A kind of methods, devices and systems playing network video
CN106294168B (en) | A kind of method and system carrying out Application testing
CN106210755A (en) | A kind of methods, devices and systems playing live video
CN105959482B (en) | A kind of scene sound effect control method, and electronic device
CN106126675A (en) | A kind of method of recommendation of audio, Apparatus and system
CN106101764A (en) | A kind of methods, devices and systems showing video data
CN103581762A (en) | Method, device and terminal equipment for playing network videos
WO2014194759A1 (en) | A method, apparatus, and system for controlling voice data transmission
CN107817988A (en) | Push message management method and related products
CN106210919A (en) | A kind of main broadcaster of broadcasting sings the methods, devices and systems of video
CN104660769B (en) | A kind of methods, devices and systems for adding associated person information
CN106682189B (en) | File name display method and device
CN105159655B (en) | Behavior event playing method and device
US20160119695A1 (en) | Method, apparatus, and system for sending and playing multimedia information
CN107193551B (en) | Method and device for generating image frame
CN106228994B (en) | A kind of method and apparatus detecting sound quality
CN105577712A (en) | File uploading method, file uploading device, and file uploading system

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
