CN115086742A - Audio and video generation method and device - Google Patents

Audio and video generation method and device

Info

Publication number
CN115086742A
Authority
CN
China
Prior art keywords
audio
video data
video
display
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210662172.XA
Other languages
Chinese (zh)
Other versions
CN115086742B (en)
Inventor
龚云荷
郑雪
余文梦
王子乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210662172.XA
Publication of CN115086742A
Application granted
Publication of CN115086742B
Legal status: Active
Anticipated expiration

Abstract

The present application relates to the field of internet technologies, and in particular to an audio and video generation method and apparatus. The audio and video generation method comprises the following steps: if a download instruction for audio and video data is acquired, displaying a first text information set; if a first selection instruction for the first text information set is acquired, acquiring a second text information set corresponding to the first selection instruction, the second text information set being a subset of the first text information set; and generating target audio and video data based on the audio and video data and the second text information set. The method and apparatus can improve the comprehensiveness of the generated audio and video.

Description

Audio and video generation method and device
Technical Field
The present application relates to the field of internet technologies, and in particular, to an audio and video generation method and apparatus.
Background
With the development of science and technology, terminals have become essential tools in people's daily life. A user can watch videos online or download them for later viewing through the terminal, which brings great convenience. However, when a user shares or saves a video by downloading it on a video detail page, only the video itself is downloaded; information such as the video caption and video comments is discarded during saving. The generated audio and video therefore lacks caption and comment information, its content is incomplete, and the user experience is affected.
Disclosure of Invention
The application provides an audio and video generation method and apparatus, so as to at least solve the problem in the related art that generated audio and video content is incomplete. The technical scheme of the application is as follows:
according to a first aspect of the embodiments of the present application, there is provided an audio and video generation method, including:
if a downloading instruction for the audio and video data is acquired, displaying a first text information set;
if a first selection instruction aiming at the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained; the second set of textual information is a subset of the first set of textual information;
and generating target audio and video data based on the audio and video data and the second text information set.
Optionally, the method further includes:
acquiring a target display mode corresponding to the target audio and video data;
and if a display instruction for the target audio and video data is acquired, displaying the target audio and video data in the target display mode.
Optionally, the obtaining of the target display mode corresponding to the target audio/video data includes:
displaying a display mode set corresponding to the target audio and video data;
and acquiring a second selection instruction aiming at the display mode set, and acquiring a target display mode corresponding to the second selection instruction.
Optionally, the displaying the target audio and video data in the target display manner includes:
if the target display mode is a bullet screen stream display mode, acquiring a display interface corresponding to the target audio and video data;
acquiring display information corresponding to at least one piece of text information corresponding to the second text information set;
and displaying, on the display interface and based on the display information, the audio and video data and the at least one piece of text information corresponding to the second text information set.
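As a minimal sketch of the bullet-screen-stream display described above (the function name and the even-spacing policy are illustrative assumptions, not details taken from the patent), each selected text item can be assigned an on-screen start time spread across the playback:

```python
def schedule_bullet_comments(texts: list[str], duration_s: float) -> list[tuple[float, str]]:
    # Evenly spread each text item's on-screen start time across the playback.
    if not texts:
        return []
    interval = duration_s / len(texts)
    return [(i * interval, text) for i, text in enumerate(texts)]

timeline = schedule_bullet_comments(["nice", "wow", "top comment"], duration_s=30.0)
# [(0.0, 'nice'), (10.0, 'wow'), (20.0, 'top comment')]
```

A real implementation would also track lane assignment and scroll speed so overlapping items do not collide; the timeline above only fixes when each item enters the display interface.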
Optionally, the displaying the target audio and video data in the target display manner includes:
if the target display mode is a navigation bar display mode, acquiring an audio and video display interface corresponding to the target audio and video data;
performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
acquiring display information corresponding to at least one piece of text information corresponding to the second text information set;
and displaying, based on the display information, the audio and video data on the processed display interface and the at least one piece of text information corresponding to the second text information set on the blank display interface.
Optionally, performing the interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface includes:
based on preset scaling size information, scaling the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
or,
based on preset cutting size information, cutting the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
or,
and responding to an interface size reduction processing instruction aiming at the target audio and video display interface, and performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and the blank display interface.
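The scaling alternative above can be sketched as follows. The `Rect` type, the bottom-strip placement of the blank interface, and the 0.75 scale factor are illustrative assumptions rather than details fixed by the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int

def shrink_for_navigation_bar(screen: Rect, scale: float = 0.75) -> tuple[Rect, Rect]:
    # Scale the video interface's height down, leaving a blank strip at the
    # bottom of the screen for the text-information navigation bar.
    video_height = int(screen.height * scale)
    video = Rect(screen.x, screen.y, screen.width, video_height)
    blank = Rect(screen.x, screen.y + video_height, screen.width,
                 screen.height - video_height)
    return video, blank

video, blank = shrink_for_navigation_bar(Rect(0, 0, 1080, 1920))
# video: Rect(x=0, y=0, width=1080, height=1440); blank: the bottom 480-pixel strip
```

The cropping alternative would instead keep the video at full scale and cut it to `video_height`; either way the geometry of the freed blank region is the same.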
Optionally, the acquiring of the display information corresponding to at least one piece of text information corresponding to the second text information set includes:
acquiring the playing duration corresponding to the target audio and video data;
acquiring the number of pieces of text information corresponding to the second text information set and the number of times the second text information set is to be displayed;
and determining the display rate information corresponding to the at least one piece of text information based on the playing duration, the number of pieces of text information and the number of display times.
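One plausible reading of the determination above is the following; the exact formula is not fixed by the patent, and this sketch assumes every text item is shown the same number of times, spread uniformly over the playback:

```python
def display_rate(play_duration_s: float, text_count: int, display_times: int) -> float:
    # Items shown per second so that each of `text_count` items appears
    # `display_times` times over the whole playback.
    if play_duration_s <= 0:
        raise ValueError("play duration must be positive")
    return (text_count * display_times) / play_duration_s

rate = display_rate(play_duration_s=60.0, text_count=12, display_times=2)
# 12 items, each shown twice, over 60 s -> 0.4 items per second
```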
Optionally, the acquiring of the display information corresponding to at least one piece of text information corresponding to the second text information set includes:
acquiring display information input for the at least one piece of text information corresponding to the second text information set, wherein the display information includes at least one of text color information and text font information.
Optionally, the displaying the target audio and video data in the target display manner includes:
acquiring at least one piece of text information corresponding to the second text information set, and like-count information and attribute information corresponding to the at least one piece of text information;
and displaying the audio and video data and the at least one piece of text information corresponding to the second text information set based on the target display mode, the like-count information and the attribute information.
Optionally, if a download instruction for the audio and video data is acquired, displaying the first text information set includes:
if a download instruction for the audio and video data is acquired, acquiring caption information and a comment information set corresponding to the audio and video data, wherein the comment information set includes at least one piece of first comment information;
adding the caption information to the comment information set as second comment information;
and displaying the comment information set.
Optionally, the displaying the comment information set includes:
and displaying the comment information set according to the display sequence of the second comment information and the at least one piece of first comment information.
According to a second aspect of the embodiments of the present application, there is provided an audio/video generating apparatus, including:
the set acquisition unit is configured to display a first text information set if a downloading instruction for the audio and video data is acquired;
the instruction acquisition unit is configured to execute the step of acquiring a second text information set corresponding to a first selection instruction if the first selection instruction for the first text information set is acquired; the second set of textual information is a subset of the first set of textual information;
and the data generation unit is configured to generate target audio and video data based on the audio and video data and the second text information set.
Optionally, the apparatus further comprises a mode acquiring unit,
the mode acquisition unit is configured to execute acquisition of a target display mode corresponding to the target audio and video data;
the data generation unit is configured to execute the display of the target audio and video data in the target display mode if the display instruction for the target audio and video data is acquired.
Optionally, when the mode acquiring unit is configured to execute acquiring the target display mode corresponding to the target audio/video data, the mode acquiring unit is specifically configured to execute:
displaying a display mode set corresponding to the target audio and video data;
and acquiring a second selection instruction aiming at the display mode set, and acquiring a target display mode corresponding to the second selection instruction.
Optionally, the displaying the target audio and video data in the target display manner includes:
if the target display mode is a bullet screen stream display mode, acquiring a display interface corresponding to the target audio and video data;
acquiring display information corresponding to at least one piece of text information corresponding to the second text information set;
and displaying, on the display interface and based on the display information, the audio and video data and the at least one piece of text information corresponding to the second text information set.
Optionally, the data generating unit, when being configured to perform displaying the target audio and video data in the target displaying manner, is specifically configured to perform:
if the target display mode is a navigation bar display mode, acquiring an audio and video display interface corresponding to the target audio and video data;
performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
acquiring display information corresponding to at least one piece of text information corresponding to the second text information set;
and displaying, based on the display information, the audio and video data on the processed display interface and the at least one piece of text information corresponding to the second text information set on the blank display interface.
Optionally, when the data generating unit is configured to perform the interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface, the data generating unit is specifically configured to perform:
based on preset scaling size information, scaling the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
or,
based on preset cutting size information, cutting the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
or,
and responding to an interface size reduction processing instruction aiming at the target audio and video display interface, and performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and the blank display interface.
Optionally, the display information includes display rate information, and when the data generating unit is configured to perform acquiring the display information corresponding to at least one piece of text information corresponding to the second text information set, the data generating unit is specifically configured to perform:
acquiring the playing duration corresponding to the target audio and video data;
acquiring the number of pieces of text information corresponding to the second text information set and the number of times the second text information set is to be displayed;
and determining the display rate information corresponding to the at least one piece of text information based on the playing duration, the number of pieces of text information and the number of display times.
Optionally, when the data generating unit is configured to perform acquiring the display information corresponding to at least one piece of text information corresponding to the second text information set, the data generating unit is specifically configured to perform:
acquiring display information input for the at least one piece of text information corresponding to the second text information set, wherein the display information includes at least one of text color information and text font information.
Optionally, the data generating unit, when being configured to perform displaying the target audio and video data in the target displaying manner, is specifically configured to perform:
acquiring at least one piece of text information corresponding to the second text information set, and like-count information and attribute information corresponding to the at least one piece of text information;
and displaying the audio and video data and the at least one piece of text information corresponding to the second text information set based on the target display mode, the like-count information and the attribute information.
Optionally, the set acquiring unit includes an information acquiring subunit, a set adding subunit and a set displaying subunit, and when the set acquiring unit is configured to perform displaying the first text information set if a download instruction for the audio and video data is acquired:
the information acquiring subunit is configured to perform, if a download instruction for the audio and video data is acquired, acquiring caption information and a comment information set corresponding to the audio and video data, wherein the comment information set includes at least one piece of first comment information;
the set adding subunit is configured to perform adding the caption information to the comment information set as second comment information;
the set displaying subunit is configured to perform displaying the comment information set.
Optionally, the set presentation subunit is configured to, when performing presentation of the comment information set, specifically configured to perform:
and displaying the comment information set according to the display sequence of the second comment information and the at least one piece of first comment information.
According to a third aspect of embodiments of the present application, there is provided a terminal, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding aspects.
According to a fourth aspect of the present application, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions which, when executed, cause a computer to perform the method of any one of the preceding aspects.
According to a fifth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the preceding aspects.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
in the embodiments of the present application, if a downloading instruction for audio and video data is acquired, a first text information set is displayed; if a first selection instruction for the first text information set is acquired, a second text information set corresponding to the first selection instruction is acquired, the second text information set being a subset of the first text information set; and target audio and video data are generated based on the audio and video data and the second text information set. Therefore, audio and video data carrying the text information set can be generated during audio and video downloading, which improves the generation quality of the audio and video, the comprehensiveness of the target audio and video data, and further the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application; they are not to be construed as limiting the application.
Fig. 1 is a background schematic diagram illustrating a method of audio-video generation in accordance with an exemplary embodiment;
fig. 2 is an architectural diagram illustrating a method of audio-video generation in accordance with an exemplary embodiment;
fig. 3 is a flow diagram illustrating a method of audio-video generation in accordance with an exemplary embodiment;
fig. 4 is a flow diagram illustrating a method of audio-video generation in accordance with an exemplary embodiment;
FIG. 5 is a presentation diagram illustrating a terminal presentation style set according to an example embodiment;
FIG. 6a is a display diagram illustrating a terminal displaying a bottom navigation bar in accordance with an illustrative embodiment;
FIG. 6b is a display diagram illustrating a terminal displaying an upper navigation bar in accordance with an illustrative embodiment;
FIG. 6c is a display diagram illustrating a terminal displaying a left navigation bar according to an example embodiment;
FIG. 6d is a display diagram illustrating a terminal displaying a right navigation bar according to an example embodiment;
fig. 7a is a schematic presentation diagram illustrating a terminal displaying an audio and video data interface in a full screen according to an exemplary embodiment;
fig. 7b is a schematic illustration of a terminal half-screen presentation audio/video data interface according to an exemplary embodiment;
fig. 8 is a flow diagram illustrating an audio-visual generation method in accordance with an exemplary embodiment;
FIG. 9 is a display diagram illustrating a terminal display download interface in accordance with an exemplary embodiment;
fig. 10 is a block diagram illustrating an audio-video generating device in accordance with an exemplary embodiment;
fig. 11 is a block diagram illustrating an audio-visual generating device according to an exemplary embodiment;
fig. 12 is a block diagram illustrating an audio-visual generating device according to an exemplary embodiment;
fig. 13 is a block diagram illustrating a terminal according to an example embodiment.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
With the development of scientific technology, terminal technology is mature day by day, and convenience of production and life of users is improved. In a terminal application scene, when a user needs to share or store a video, the user can download the video through a video application program.
According to some embodiments, fig. 1 is a background schematic diagram illustrating a method of audio-video generation according to an exemplary embodiment. As shown in fig. 1, a user may click a video application program on the terminal, and when the terminal detects the click, the terminal may display an audio/video list interface. When the terminal detects that the user clicks a key corresponding to an audio/video in the list interface, the terminal may display the audio/video detail interface corresponding to that audio/video. The user can then click a download button on the detail interface, and when the terminal detects the click, the terminal can download the audio/video.
In some embodiments, when a user downloads an audio/video through the terminal on the audio/video detail interface, only the audio/video itself can be downloaded, and information such as the caption and the popular comments of the audio/video is discarded during saving. For some audio/videos, the caption, popular comments and similar information are an important part of the content, so when they cannot be downloaded together with the audio/video, the downloaded content is incomplete and the user experience suffers.
According to some embodiments, fig. 2 is an architectural diagram illustrating a method of audio-video generation according to an exemplary embodiment. As shown in fig. 2, the terminal 110 may upload the name of the audio/video selected by the user to the server 130 through the network 120. When the server 130 receives the name of the audio/video selected by the user for downloading, the server 130 may transmit the audio/video data to the terminal 110 through the network 120. When the terminal 110 receives the audio/video data, the user can play the audio/video through the terminal.
It is readily understood that the terminal includes, but is not limited to: wearable devices, handheld devices, personal computers, tablet computers, in-vehicle devices, smart phones, and computing devices or other processing devices connected to a wireless modem. Terminal devices in different networks may be called different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent, cellular telephone, cordless telephone, personal digital assistant (PDA), or terminal equipment in a 5th-generation mobile network or a future evolved network. The terminal may run an operating system, i.e., a program that manages and controls the terminal hardware and terminal applications and is an indispensable system application of the terminal. Such operating systems include, but are not limited to, Android, iOS, Windows Phone (WP) and the Ubuntu mobile operating system.
According to some embodiments, the terminal 110 may be connected to the server 130 through the network 120. The network 120 is used to provide a communication link between the terminal 110 and the server 130, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables. It should be understood that the numbers of terminals 110, networks 120 and servers 130 in fig. 2 are merely illustrative; there may be any number of terminals, networks and servers as required in practice. For example, the server 130 may be a server cluster composed of a plurality of servers. A user may use the terminal 110 to interact with the server 130 over the network 120 for audio and video generation and the like.
Fig. 3 is a flowchart illustrating an audio/video generation method according to an exemplary embodiment. As shown in fig. 3, the method may be applied, for example, in a multimedia scene; it may be implemented by a computer program and executed on a terminal having an audio/video generation function, and includes the following steps:
in step S11, if a download instruction for audio/video data is acquired, a first text information set is displayed;
According to some embodiments, the audio and video data refers to the audio and video data downloaded by a user through a terminal, and does not refer to fixed audio and video data. For example, when the audio and video data selected by the user for downloading changes, the audio and video data changes accordingly; when the content of the audio and video data changes, the audio and video data also changes.
According to some embodiments, the downloading instruction refers to an instruction sent when the user needs to download the audio and video. The download instruction does not refer to a fixed instruction. The download instructions include, but are not limited to, click download instructions, voice download instructions, and the like. When the terminal acquires the modification instruction aiming at the downloading instruction, the downloading instruction is changed correspondingly. For example, when the user clicks a "download" button in the audio/video details interface, the terminal may obtain a download instruction for the audio/video. Or, when the user says the downloading voice information on the audio/video detail interface, the terminal can also acquire a downloading instruction for the audio/video.
According to some embodiments, the text information refers to text information corresponding to audio and video data which can be downloaded when the terminal downloads the audio and video data. The text information does not refer to a fixed information. For example, when the audio-video data is transformed, the text information is changed accordingly. When the time of downloading changes, the text information changes correspondingly. The text information includes, but is not limited to, a case corresponding to the audio-video data, comment information corresponding to the audio-video data, and the like.
It is easy to understand that the first text information set refers to a set formed by gathering at least one text information. The first set of text information does not refer to a fixed set of text information. For example, when the audio-video data changes, the first text information set corresponding to the audio-video data can also change correspondingly. For example, when comment information corresponding to the audio-video data changes, the first text information set corresponding to the audio-video data may also change correspondingly. The text information in the first text information set includes, but is not limited to, a file corresponding to the audio and video, comment information corresponding to the audio and video, and the like.
According to some embodiments, if the terminal acquires a download instruction for the audio and video data, the terminal may display the first text information set.
In step S12, if a first selection instruction for the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained;
according to some embodiments, the second set of text information is a subset of the first set of text information, that is, the second set of text information may be the same as the first set of text information and may also include only a portion of the text information in the first set of text information.
It is readily understood that the first selection instruction is a selection instruction for a first set of text information, wherein a first of the first selection instructions is merely used to distinguish between other selection instructions. The first selection instruction does not refer to a fixed selection instruction. The first selection instruction includes, but is not limited to, a click selection instruction, a preset selection instruction, a voice selection instruction, and the like. The preset selection instruction may also determine, for example, the first 10 comments corresponding to the audio/video data and the pattern corresponding to the audio/video data as the second text information set.
According to some embodiments, if a downloading instruction for the audio and video data is acquired, the terminal may display the first text information set. If the first selection instruction for the first text information set is obtained, the terminal may obtain a second text information set corresponding to the first selection instruction.
In step S13, target audio-video data is generated based on the audio-video data and the second set of textual information.
It is easy to understand that the target audio and video data refers to audio and video data generated by the terminal in response to the downloading instruction and the first selection instruction, based on the acquired audio and video data and the second text information set. The target audio and video data comprises the audio and video data and the second text information set.
In some embodiments, when the terminal acquires the audio and video data and the second text information set, the terminal may generate the target audio and video data based on the audio and video data and the second text information set.
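The bundling of audio and video data with the selected text information set can be sketched as follows. This is a minimal illustration: the `AudioVideoData` and `TargetAudioVideoData` containers are hypothetical stand-ins, not the concrete media format used by the terminal.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AudioVideoData:
    # Hypothetical container for the downloaded audio and video stream.
    title: str
    payload: bytes

@dataclass
class TargetAudioVideoData:
    # The target data bundles the original stream with the selected text set.
    audio_video: AudioVideoData
    text_information: List[str] = field(default_factory=list)

def generate_target(av: AudioVideoData, second_text_set: List[str]) -> TargetAudioVideoData:
    """Generate target audio and video data from the stream and the second text information set."""
    return TargetAudioVideoData(audio_video=av, text_information=list(second_text_set))

av = AudioVideoData(title="demo", payload=b"\x00")
target = generate_target(av, ["caption text", "comment A"])
```

The key point is only that the target data carries both parts, so a later display step can lay the text over or beside the video.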
In summary, according to the method provided by the embodiment of the application, if a downloading instruction for audio and video data is obtained, a first text information set is displayed; if a first selection instruction aiming at the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained; the second text information set is a subset of the first text information set; and generating target audio and video data based on the audio and video data and the second text information set. Therefore, the audio and video data with the text information set can be generated in the audio and video downloading process, the generation quality of the audio and video can be improved, the comprehensiveness of the target audio and video data is improved, and the use experience of a user is further improved.
Fig. 4 is a flow diagram illustrating a method of audio-video generation in accordance with an exemplary embodiment. As shown in fig. 4, the audio/video generation method includes the following steps:
in step S21, if a download instruction for the audio/video data is acquired, a first text information set is displayed;
the specific process is as described above, and is not described herein again.
In step S22, if a first selection instruction for the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained;
the specific process is as described above, and is not described herein again.
According to some embodiments, when the first selection instruction for the first text information set is acquired, the terminal may acquire a second text information set corresponding to the first selection instruction. For example, the first text information set includes the caption information and at least one piece of comment information corresponding to the audio and video data. The first selection instruction may be, for example, an instruction to select the caption information and the comment information ranked 10th by number of likes among the at least one piece of comment information, and the second text information set acquired by the terminal may then be the caption information together with that 10th-ranked comment information.
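A preset selection of this kind can be sketched as below. The `{"text", "likes"}` comment record is a hypothetical shape chosen for illustration, not the patent's actual data model.

```python
def select_second_text_set(caption, comments, rank=10):
    """Build the second text information set: the caption plus the comment
    ranked `rank`-th by like count among the comment information."""
    ordered = sorted(comments, key=lambda c: c["likes"], reverse=True)
    selected = [caption]
    if len(ordered) >= rank:
        selected.append(ordered[rank - 1]["text"])
    return selected

# 12 comments with like counts 100, 99, ..., 89
comments = [{"text": f"comment {i}", "likes": 100 - i} for i in range(12)]
second_set = select_second_text_set("caption text", comments)
```

Here `second_set` holds the caption and `"comment 9"`, the comment with the 10th-highest like count.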
In step S23, generating target audio-video data based on the audio-video data and the second text information set;
the specific process is as described above, and is not described herein again.
In step S24, a target display mode corresponding to the target audio/video data is acquired;
according to some embodiments, the target presentation mode refers to a presentation mode corresponding to the target audio and video data. The target presentation mode does not refer to a fixed presentation mode. For example, when the target presentation mode is confirmed based on the second selection instruction, if the second selection instruction changes, the target presentation mode may change accordingly.
According to some embodiments, to acquire the target display mode corresponding to the target audio and video data, the terminal may acquire a display mode set corresponding to the target audio and video data, acquire a second selection instruction for the display mode set, and acquire the target display mode corresponding to the second selection instruction. In this way, the matching between the display mode and the target audio and video data can be improved, and the display effect of the audio and video data can be improved.
According to some embodiments, the presentation mode refers to a mode corresponding to the play mode of the target audio and video data. The presentation mode does not refer to a fixed mode. For example, when the play mode changes, the presentation mode changes as well. When the terminal acquires a mode modification instruction for the presentation mode, the presentation mode changes accordingly.
In some embodiments, all the presentation modes are put into the same set, so that a presentation mode set can be obtained; that is, the presentation mode set is a set formed by aggregating at least one presentation mode. The presentation mode set does not refer to a fixed set. For example, when the number of presentation modes changes, the presentation mode set also changes; when a presentation mode is changed, the presentation mode set changes as well. The display modes in the display mode set corresponding to the target audio and video data displayed by the terminal include, but are not limited to, a bullet screen stream display mode, a navigation bar display mode, and the like.
In some embodiments, the terminal may generate a floating window on the display interface, and display a display mode set corresponding to the audio and video data in the floating window. The terminal can also display a display mode set corresponding to the audio and video data on a sub-display interface of the display interface.
In some embodiments, a floating window refers to a window that can float over a single page or over multiple pages. The floating window does not refer to a fixed floating window. For example, when the audio and video data changes, the content of the floating window may also change; when the display mode changes, the content of the floating window may change as well. In addition, a user may move the floating window and adjust its size.
In some embodiments, when the sub-display interface cannot display the entire display mode set corresponding to the audio and video data, the terminal may set a scroll operation bar and display the entire display mode set corresponding to the audio and video data through the scroll operation bar. Alternatively, the terminal may display all the display modes corresponding to the audio and video data by adjusting the size of the sub-display interface.
According to some embodiments, a presentation mode set displayed by the terminal for the target audio and video data may be as shown in fig. 5, for example. The second selection instruction for the presentation mode set refers to an instruction for selecting, from the presentation mode set, the mode in which to present the target audio and video data. The second selection instruction does not refer to a fixed instruction. The second selection instruction includes, but is not limited to, a click selection instruction, a voice selection instruction, and the like. When the terminal acquires a modification instruction for the second selection instruction, the selection instruction changes accordingly. For example, when the user clicks a control corresponding to any presentation mode in the presentation mode set, the terminal may obtain a second selection instruction for that presentation mode. Alternatively, when the user speaks the voice information corresponding to any presentation mode in the presentation mode set, the terminal may also obtain a second selection instruction for that presentation mode.
According to some embodiments, when the terminal generates the target audio and video data, the terminal can acquire a target display mode corresponding to the target audio and video data.
In step S25, if the display instruction for the target audio/video data is acquired, the target audio/video data is displayed in a target display manner.
According to some embodiments, the display instruction refers to an instruction received by the terminal to display the generated target audio and video data. The display instruction does not refer to a fixed instruction. The display instruction includes, but is not limited to, a click display instruction, a voice display instruction, and the like. When the terminal acquires a modification instruction for the display instruction, the display instruction changes accordingly. For example, when a user clicks a "display" control in the video detail interface corresponding to the target audio and video data, the terminal may obtain a display instruction for the target audio and video data. Alternatively, when the user speaks the voice information "display the target audio and video data", the terminal may also obtain a display instruction for the target audio and video data.
According to some embodiments, if the display instruction for the target audio and video data is acquired, the terminal can display the target audio and video data in a target display mode.
According to some embodiments, the terminal displays the target audio and video data in the target display mode as follows: if the target display mode is the bullet screen stream display mode, a display interface corresponding to the target audio and video data is acquired; display information corresponding to at least one piece of text information in the second text information set is acquired; and, based on the display information, the audio and video data and the at least one piece of text information in the second text information set are displayed on the display interface. The terminal can thus display the target audio and video data in the bullet screen stream display mode, which can improve the convenience and the display effect of displaying the target audio and video data.
According to some embodiments, the bullet screen stream display mode means that, when the terminal plays the audio and video, the text information in the second text information set is displayed in the video picture in the form of a bullet screen. The bullet screen stream display mode does not refer to fixed information. For example, when the audio and video change, the bullet screen stream display mode changes accordingly; when the second text information set changes, the bullet screen stream display mode changes accordingly.
In some embodiments, the display interface refers to a display interface corresponding to the target audio/video data when the target display mode is the bullet stream display mode. The presentation interface is not specific to a fixed presentation interface. For example, when the interface size of the display interface preset by the terminal changes, the display interface can also change correspondingly.
According to some embodiments, the display information corresponding to at least one piece of text information refers to information corresponding to the manner in which the text information is displayed when the terminal plays the audio and video. The display information does not refer to fixed information. The display information includes, but is not limited to, display rate information, text color information, text font information, and the like.
It is easy to understand that when the terminal determines that the target display mode is the bullet stream display mode, a display interface corresponding to the target audio and video data is obtained. When the terminal acquires the display interface, the terminal may acquire display information corresponding to at least one text message corresponding to the second text message set. Based on the display information, the terminal can display the audio and video data and at least one piece of text information corresponding to the second text information set on the display interface. The terminal presenting the second set of text information includes, but is not limited to, a full screen presentation, a half screen presentation, and the like.
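The bullet screen stream display can be sketched as building one overlay item per piece of text, each carrying its display information. The even spacing of start times across playback, and the default color and font values, are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class BulletItem:
    text: str
    start_time: float  # seconds into playback
    color: str
    font: str

def build_bullet_stream(texts, play_duration, color="white", font="sans-serif"):
    """Spread the texts of the second text information set evenly across
    playback as bullet screen overlay items."""
    if not texts:
        return []
    interval = play_duration / len(texts)
    return [BulletItem(t, round(i * interval, 3), color, font)
            for i, t in enumerate(texts)]

items = build_bullet_stream(["caption", "comment A", "comment B", "comment C"], 60.0)
```

A player would then draw each item over the video picture when playback reaches its `start_time`, scrolling it across the frame.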
According to some embodiments, the terminal displays the target audio and video data in a target display mode, wherein if the target display mode is a navigation bar display mode, an audio and video display interface corresponding to the target audio and video data is obtained; performing interface size reduction processing on an audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface; acquiring display information corresponding to at least one text message corresponding to the second text message set; and displaying the audio and video data on the processed display interface based on the display information, and displaying at least one text message corresponding to the second text message set on the blank display interface. The terminal can display the target audio and video data in a navigation bar display mode, and convenience and display effect of display of the target audio and video data can be improved.
According to some embodiments, the navigation bar display mode means that, when the terminal displays the target audio and video data, the text information in the second text information set is displayed in the audio and video picture in the form of a navigation bar. The navigation bar display mode does not refer to fixed information. For example, when the audio and video change, the navigation bar display mode changes accordingly; when the second text information set changes, the navigation bar display mode changes accordingly.
According to some embodiments, the terminal performs interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface as follows: based on preset scaling size information, scaling processing is performed on the audio and video display interface corresponding to the target audio and video data to obtain the processed audio and video display interface and the blank display interface;
or,
based on preset cutting size information, cutting an audio and video display interface corresponding to target audio and video data to obtain a processed audio and video display interface and a blank display interface;
or,
and responding to an interface size reduction processing instruction aiming at the target audio and video display interface, and performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface.
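The three branches above (scaling, cropping, or an explicit reduction instruction) all produce a smaller video interface plus a blank interface. A geometric sketch is below; the assumption that the blank area is taken from the bottom of the original interface is one illustrative choice, since the navigation bar may equally sit at the top or either side (fig. 6a-6d).

```python
def reduce_interface(width, height, scale=None, crop_height=None):
    """Interface size reduction: return (processed interface, blank interface)
    sizes in pixels. `scale` and `crop_height` stand in for the preset scaling
    size information and preset cropping size information."""
    if scale is not None:
        new_w, new_h = int(width * scale), int(height * scale)
    elif crop_height is not None:
        new_w, new_h = width, height - crop_height
    else:
        raise ValueError("preset scaling or cropping size information required")
    processed = (new_w, new_h)
    blank = (width, height - new_h)  # remaining strip hosts the navigation bar
    return processed, blank

scaled, blank_a = reduce_interface(1080, 1920, scale=0.8)
cropped, blank_b = reduce_interface(1080, 1920, crop_height=400)
```

Scaling shrinks both dimensions and leaves a larger blank strip; cropping keeps the full width and removes exactly the preset height.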
According to some embodiments, when the terminal performs interface size reduction processing on the audio/video display interface corresponding to the target audio/video data, the interface size reduction processing mode includes, but is not limited to, preset scaling size adjustment, preset clipping size adjustment, instruction size adjustment and the like, so that the use experience of a user can be improved.
In some embodiments, when the terminal performs interface size reduction processing in a preset scaling size adjustment manner, the terminal may perform scaling processing on an audio/video display interface corresponding to the audio/video data based on the preset scaling size information to obtain a processed audio/video display interface and a blank display interface.
In some embodiments, the preset zoom size information refers to size information used by the terminal to adjust the interface size. The preset zoom size information does not refer to fixed information. When the terminal acquires an information modification instruction for the preset zoom size information, the preset zoom size information changes accordingly.
In some embodiments, when the terminal performs the interface size reduction processing in the preset cutting size adjustment mode, the terminal may perform cutting processing on the audio/video display interface corresponding to the audio/video data based on the preset cutting size information to obtain a processed audio/video display interface and a blank display interface.
In some embodiments, the preset cropping size information refers to size information used by the terminal to adjust the interface size. The preset cropping size information does not refer to fixed information. When the terminal acquires an information modification instruction for the preset cropping size information, the preset cropping size information changes accordingly.
In some embodiments, when the terminal performs the interface size reduction processing in the instruction size adjustment manner, the terminal may perform the interface size reduction processing on the audio/video display interface corresponding to the audio/video data in response to the interface size reduction processing instruction for the audio/video display interface, so as to obtain a processed audio/video display interface and a blank display interface.
In some embodiments, the manner in which the terminal obtains the interface size reduction processing instruction for the audio and video display interface includes, but is not limited to, a click reduction instruction, a voice reduction instruction, and the like. For example, when the user double-clicks the audio and video display interface, the terminal may obtain an interface size reduction processing instruction for the audio and video display interface. When the user speaks voice information such as "reduce the interface", the terminal may also obtain an interface size reduction processing instruction for the audio and video display interface.
It is easy to understand that when the terminal acquires the audio/video display interface corresponding to the audio/video data, the terminal can perform interface size reduction processing on the audio/video display interface corresponding to the audio/video data to obtain a processed audio/video display interface and a blank display interface.
According to some embodiments, the terminal displays the target audio and video data on the processed display interface based on the display information, and displays at least one text message corresponding to the second text message set on the blank display interface. The display modes include, but are not limited to, a bottom navigation bar display as shown in fig. 6a, an upper navigation bar display as shown in fig. 6b, a left navigation bar display as shown in fig. 6c, a right navigation bar display as shown in fig. 6d, and the like.
According to some embodiments, the display information includes display rate information. When the terminal acquires the display information corresponding to at least one piece of text information in the second text information set, the method may be: acquiring the play duration corresponding to the target audio and video data; acquiring the number of pieces of text information in the second text information set and the corresponding number of display times of the second text information set; and determining the display rate information corresponding to the at least one piece of text information based on the play duration, the number of pieces of text information, and the number of display times. The terminal can thus determine the display rate information from the play duration, the number of pieces of text information, and the number of display times, which can improve the use experience of the user.
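One natural way to combine the three quantities is a rate of presentations per second; the method only states that the rate is determined from the play duration, the text count, and the display times, so the exact formula below is an assumption.

```python
def display_rate(play_duration_s, text_count, display_times):
    """Display rate as total presentations per second:
    (number of texts x display times) / play duration."""
    if play_duration_s <= 0:
        raise ValueError("play duration must be positive")
    return (text_count * display_times) / play_duration_s

# 12 texts, each shown twice, over a 60-second video
rate = display_rate(play_duration_s=60, text_count=12, display_times=2)
```

With this formula, every piece of text information completes all of its display times exactly within the play duration.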
According to some embodiments, when the terminal acquires the display information corresponding to at least one piece of text information in the second text information set, the method may be: acquiring display information input for the at least one piece of text information, where the display information includes at least one of text color information and text font information; and displaying the audio and video data and the at least one piece of text information in the second text information set based on the target display mode and the display information. In this way, when playing the audio and video, the user can adjust the text color information and the text font information according to the picture color of the target audio and video data, further improving the use experience of the user.
According to some embodiments, the terminal displaying the target audio and video data in the target display mode may be: acquiring at least one piece of text information in the second text information set, together with the like count information and attribute information corresponding to the at least one piece of text information; and displaying the audio and video data and the at least one piece of text information in the second text information set based on the target display mode, the like count information, and the attribute information. In this way, the matching between the display of the at least one piece of text information and the audio and video data can be improved, and the display effect of the target audio and video can be improved.
In some embodiments, the attribute information refers to attribute information corresponding to the text information. The attribute information does not refer to fixed information. For example, when the text information changes, the attribute information also changes accordingly. When the terminal acquires an information modification instruction for the attribute information, the attribute information changes accordingly. The attribute information includes, but is not limited to, a comment tag attribute, and the like.
For example, when the terminal displays the target audio and video data and at least one piece of text information in the second text information set, the terminal may display the target audio and video data in full screen, and display, on the upper portion of the display interface corresponding to the target audio and video data, the text information A, the like count 6W of the text information A, the comment tag of the text information A, the text information B, the like count 3W of the text information B, and the comment tag of the text information B; at this time, an example schematic diagram of the terminal interface may be as shown in fig. 7a. The terminal may also display the target audio and video data in half screen, and display, on the lower portion of the target audio and video data display interface, the text information A, the like count 6W of the text information A, the comment tag of the text information A, the text information B, the like count 3W of the text information B, and the comment tag of the text information B; at this time, an example schematic diagram of the terminal interface may be as shown in fig. 7b.
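Ordering the text information by like count before display, as in the fig. 7 example, can be sketched as below; the `{"text", "likes", "tag"}` record is a hypothetical shape for illustration.

```python
def order_for_display(items):
    """Order text information for display by like count, descending."""
    return sorted(items, key=lambda it: it["likes"], reverse=True)

# Mirrors the fig. 7 example: text A has 6W likes, text B has 3W likes.
ordered = order_for_display([
    {"text": "text information B", "likes": 30000, "tag": "comment"},
    {"text": "text information A", "likes": 60000, "tag": "comment"},
])
```

The display step can then render each item together with its like count and tag in the chosen target display mode.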
In summary, according to the method provided by the embodiment of the application, if a downloading instruction for audio and video data is obtained, a first text information set is displayed; if a first selection instruction aiming at the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained; target audio and video data are generated based on the audio and video data and the second text information set, so that the audio and video data with the text information set can be generated in the audio and video downloading process, the generation quality of the audio and video can be improved, the comprehensiveness of the target audio and video data is improved, and the use experience of a user is further improved. Secondly, by obtaining a target display mode corresponding to the target audio and video data, if a display instruction for the target audio and video data is obtained, the target audio and video data is displayed in the target display mode, the target audio and video data can be displayed in the target display mode, the matching between the target display mode and the target audio and video data can be improved, and the display effect of the target audio and video data is improved.
Fig. 8 is a flowchart illustrating an audio-visual generation method according to an exemplary embodiment. As shown in fig. 8, the audio/video generation method includes the steps of:
in step S31, if a download instruction for the audio and video data is acquired, acquiring the caption information and the comment information set corresponding to the audio and video data;
the specific process is as described above, and is not described herein again.
According to some embodiments, the downloading instruction refers to an instruction sent when the user needs to download the audio and video. The download instruction does not refer to a fixed instruction. The download instruction includes, but is not limited to, a click download instruction, a voice download instruction, and the like. When the terminal acquires a modification instruction for the download instruction, the download instruction changes accordingly. For example, when the user clicks a "download" button in the audio and video detail interface, the terminal may acquire a download instruction for the audio and video. Alternatively, when the user speaks the "download" voice information on the audio and video detail interface, the terminal may also obtain a download instruction for the audio and video.
According to some embodiments, when the terminal acquires a download instruction for the audio and video data, the terminal can acquire the caption information and the comment information set corresponding to the audio and video data.
In some embodiments, the caption information refers to information corresponding to the caption of the audio and video. The caption information does not refer to fixed information. For example, when the audio and video change, the caption information changes accordingly; when the user who makes the audio and video changes, the caption information changes accordingly. The caption information includes, but is not limited to, the theme, type, emphasis of expression, outline, etc. of the audio and video.
In some embodiments, a comment information set refers to a set of comment information corresponding to an audio and video. The comment information set includes at least one piece of first comment information. The comment information set does not refer to a fixed set. For example, when the audio and video change, the comment information set changes accordingly; when the comment information changes, the comment information set changes accordingly. The first comment information refers to information input by a user to comment on the audio and video.
According to some embodiments, if a download instruction for the audio and video data is acquired, the terminal can acquire the caption information and the comment information set corresponding to the audio and video data.
In step S32, adding the caption information as second comment information to the comment information set;
in some embodiments, the terminal may treat the comment information already in the comment information set as first comment information, and add the caption information as second comment information to the comment information set.
In some embodiments, when the terminal displays the comment information set, the terminal may display it in the order of the second comment information followed by the at least one piece of first comment information. In this way, the caption information can be automatically displayed at the first comment position, which reduces the selection steps for the text information and improves the selection efficiency of the text information.
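The insertion step can be sketched as prepending the second comment information to the comment information set so it is displayed at the first comment position; the record shape below is a hypothetical illustration.

```python
def add_caption_to_comments(caption, first_comments):
    """Add the caption information as second comment information at the head of
    the comment information set."""
    comment_set = [{"text": caption, "kind": "caption"}]
    comment_set += [{"text": c, "kind": "comment"} for c in first_comments]
    return comment_set

comment_set = add_caption_to_comments("caption text", ["comment 1", "comment 2"])
```

Rendering `comment_set` in order then shows the caption first, followed by the user comments, matching the display order described above.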
It is easy to understand that, when a user needs to download the audio and video, the user can send a download instruction for the audio and video. When the terminal acquires the download instruction for the audio and video data, the terminal can display the comment information set corresponding to the audio and video data.
In step S33, a comment information set is presented;
according to some embodiments, after the terminal adds the caption information as the second comment information to the comment information set, the terminal may display the comment information set. An example schematic diagram of the terminal interface at this time may be as shown in fig. 9.
In step S34, if a first selection instruction for the first text information set is obtained, a second text information set corresponding to the first selection instruction is obtained;
the specific process is as described above, and is not described herein again.
In step S35, target audio-video data is generated based on the audio-video data and the second set of textual information.
The specific process is as described above, and is not described herein again.
In the embodiment of the application, if the download instruction for the audio and video data is acquired, the caption information and the comment information set corresponding to the audio and video data are acquired, the caption information is added to the comment information set as second comment information, and the comment information set is displayed. The caption information can thus be displayed in the format of comment information, which reduces the selection steps for text information and improves the selection efficiency of text information.
In summary, according to the method provided by the embodiment of the application, if the download instruction for the audio and video data is acquired, the caption information and the comment information set corresponding to the audio and video data are acquired, the caption information is added to the comment information set as second comment information, and the comment information set is displayed, so that the caption information can be displayed in the format of comment information, reducing the selection steps for text information and improving the selection efficiency of text information. Secondly, if a first selection instruction for the first text information set is acquired, a second text information set corresponding to the first selection instruction is acquired, and target audio and video data is generated based on the audio and video data and the second text information set. Audio and video data carrying a text information set can thus be generated during audio and video downloading, which can improve the generation quality of the audio and video, improve the comprehensiveness of the target audio and video data, and further improve the use experience of the user.
Fig. 10 shows a block diagram of an audio and video generation apparatus according to an exemplary embodiment. Referring to fig. 10, the audio and video generating apparatus 1000 includes a set acquiring unit 1001, an instruction acquiring unit 1002, and a data generating unit 1003.
The set acquiring unit 1001 is configured to display a first text information set if a download instruction for audio and video data is acquired;
the instruction acquiring unit 1002 is configured to, if a first selection instruction for the first text information set is acquired, acquire a second text information set corresponding to the first selection instruction; the second text information set is a subset of the first text information set;
the data generating unit 1003 is configured to generate target audio and video data based on the audio and video data and the second text information set.
According to some embodiments, fig. 11 is a block diagram illustrating an audio and video generating apparatus according to an exemplary embodiment. Referring to fig. 11, the apparatus 1000 further includes a mode acquiring unit 1004.
The mode acquiring unit 1004 is configured to acquire a target display mode corresponding to the target audio and video data;
the data generating unit 1003 is configured to display the target audio and video data in the target display mode if the display instruction for the target audio and video data is acquired.
According to some embodiments, the mode acquiring unit 1004, when configured to acquire the target display mode corresponding to the target audio and video data, is specifically configured to perform:
displaying a display mode set corresponding to the target audio and video data;
and acquiring a selection instruction for the display mode set, and acquiring a target display mode corresponding to the selection instruction.
According to some embodiments, themanner obtaining unit 1004, when being configured to perform displaying the target audio and video data in the target display manner, is specifically configured to perform:
if the target display mode is a bullet screen stream display mode, acquiring a display interface corresponding to the target audio and video data;
acquiring display information corresponding to at least one text message corresponding to the second text message set;
and displaying at least one text message corresponding to the audio and video data and the second text message set on the display interface based on the display information.
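For example, a bullet screen stream display may be sketched as follows: the selected text items are spread across the video timeline and assigned vertical lanes on the display interface. This is a minimal illustration only; the `DanmakuItem` structure, the even time spacing, and the round-robin lane assignment are assumptions for illustration, not part of the disclosed embodiments.

```python
from dataclasses import dataclass

@dataclass
class DanmakuItem:
    text: str
    start_s: float  # time at which the item enters the display interface
    track: int      # vertical lane index on the display interface

def schedule_danmaku(texts, video_duration_s, num_tracks=4):
    # Spread the selected text items evenly across the video timeline and
    # assign lanes round-robin so items in the same lane do not overlap.
    if not texts:
        return []
    interval = video_duration_s / len(texts)
    return [
        DanmakuItem(text=t, start_s=i * interval, track=i % num_tracks)
        for i, t in enumerate(texts)
    ]

items = schedule_danmaku(["great clip", "nice edit", "lol"], video_duration_s=30)
```

A renderer would then move each item across the display interface starting at its `start_s`, which realizes the "bullet screen stream" overlay described above.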
According to some embodiments, when displaying the target audio and video data in the target display mode, the data generating unit 1003 is further specifically configured to perform:
if the target display mode is a navigation bar display mode, acquiring an audio and video display interface corresponding to the target audio and video data;
performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain a processed audio and video display interface and a blank display interface;
acquiring display information corresponding to at least one piece of text information in the second text information set;
and displaying, based on the display information, the audio and video data on the processed audio and video display interface and the at least one piece of text information in the second text information set on the blank display interface.
According to some embodiments, when performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain the processed audio and video display interface and the blank display interface, the data generating unit 1003 is specifically configured to perform:
scaling, based on preset scaling size information, the audio and video display interface corresponding to the target audio and video data to obtain the processed audio and video display interface and the blank display interface;
or, cropping, based on preset cropping size information, the audio and video display interface corresponding to the target audio and video data to obtain the processed audio and video display interface and the blank display interface;
or, in response to an interface size reduction processing instruction for the target audio and video display interface, performing interface size reduction processing on the audio and video display interface corresponding to the target audio and video data to obtain the processed audio and video display interface and the blank display interface.
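The scaling branch of the interface size reduction can be sketched as follows: the display interface is shrunk by a preset scale factor, and the freed region below it becomes the blank display interface for the text items. The concrete scale factor, the top anchoring, and the rectangle representation are assumptions for illustration.

```python
def shrink_interface(width, height, scale=0.75):
    # Scale the audio/video display interface by `scale`, keep the processed
    # interface anchored at the top and horizontally centered; the freed
    # region below becomes the blank display interface for text items.
    if not 0 < scale <= 1:
        raise ValueError("scale must be in (0, 1]")
    new_w, new_h = int(width * scale), int(height * scale)
    processed = {"x": (width - new_w) // 2, "y": 0, "width": new_w, "height": new_h}
    # Blank interface: full width, remaining height under the video.
    blank = {"x": 0, "y": new_h, "width": width, "height": height - new_h}
    return processed, blank

proc, blank = shrink_interface(1080, 1920, scale=0.75)
```

The cropping branch would differ only in that the video content is cut rather than scaled; the blank-region computation is the same.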
According to some embodiments, the display information includes display rate information, and when acquiring the display information corresponding to the at least one piece of text information in the second text information set, the data generating unit 1003 is specifically configured to perform:
acquiring a playing time corresponding to the target audio and video data;
acquiring the quantity of text information in the second text information set and the display times corresponding to the second text information set;
and determining, based on the playing time, the quantity of text information, and the display times, the display rate information corresponding to the at least one piece of text information in the second text information set.
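A possible reading of this determination is that the display rate equals the total number of item appearances divided by the playing time, so that every item is shown the required number of times within the video. The patent only names the three inputs; the concrete formula below is an assumption.

```python
def display_rate(play_time_s, num_texts, display_times):
    # Items that must enter the screen per second so that each of the
    # `num_texts` items appears `display_times` times within `play_time_s`.
    if play_time_s <= 0:
        raise ValueError("play time must be positive")
    return (num_texts * display_times) / play_time_s

rate = display_rate(play_time_s=60, num_texts=30, display_times=2)
```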
According to some embodiments, when acquiring the display information corresponding to the at least one piece of text information in the second text information set, the data generating unit 1003 is specifically configured to perform:
acquiring display information input for the at least one piece of text information in the second text information set, where the display information includes at least one of text color information and text font information.
According to some embodiments, when displaying the target audio and video data in the target display mode, the data generating unit 1003 is specifically configured to perform:
acquiring the at least one piece of text information in the second text information set, together with the like count information and attribute information corresponding to the at least one piece of text information;
and displaying the audio and video data and the at least one piece of text information in the second text information set based on the target display mode, the like count information, and the attribute information.
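One plausible ordering based on the attribute and like count information is to place caption-type items first and then sort comments by descending like count. The concrete rule and data shapes below are assumptions; the patent only states that display is based on these quantities.

```python
def order_for_display(items):
    # Caption-type items (attribute "caption") first, then comments sorted
    # by descending like count; sort key is (type rank, -likes).
    return sorted(
        items,
        key=lambda it: (0 if it["attribute"] == "caption" else 1, -it["likes"]),
    )

ordered = order_for_display([
    {"text": "nice", "likes": 5, "attribute": "comment"},
    {"text": "intro caption", "likes": 0, "attribute": "caption"},
    {"text": "wow", "likes": 9, "attribute": "comment"},
])
```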
According to some embodiments, fig. 12 is a block diagram illustrating an audio and video generating apparatus according to an exemplary embodiment. Referring to fig. 12, the set acquiring unit 1001 includes an information acquiring subunit 1011, a set adding subunit 1021, and a set presenting subunit 1031. When displaying the first text information set upon acquiring a download instruction for the audio and video data, the set acquiring unit 1001 operates as follows:
the information acquiring subunit 1011 is configured to acquire, if a download instruction for the audio and video data is acquired, the file information and the comment information set corresponding to the audio and video data, where the comment information set includes at least one piece of first comment information;
the set adding subunit 1021 is configured to add the file information to the comment information set as second comment information;
the set presenting subunit 1031 is configured to present the comment information set.
According to some embodiments, when presenting the comment information set, the set presenting subunit 1031 is specifically configured to perform:
presenting the comment information set in the display order of the second comment information followed by the at least one piece of first comment information.
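The described flow of adding the file information to the comment set as second comment information and presenting it first can be sketched as follows; the dictionary representation of the items is an assumption for illustration.

```python
def build_display_set(caption, comments):
    # Add the caption (file information) to the comment set as "second
    # comment information" and return the set in display order:
    # caption first, then the existing first comment information.
    second_comment = {"text": caption, "source": "caption"}
    first_comments = [{"text": c, "source": "comment"} for c in comments]
    return [second_comment] + first_comments

display_set = build_display_set("My holiday vlog", ["great video", "love it"])
```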
To sum up, in the apparatus provided by the embodiments of the application, the set acquiring unit is configured to display the first text information set if a download instruction for the audio and video data is acquired; the instruction acquiring unit is configured to acquire, if a selection instruction for the first text information set is acquired, a second text information set corresponding to the selection instruction, where the second text information set is a subset of the first text information set; and the data generating unit is configured to generate target audio and video data based on the audio and video data and the second text information set. Audio and video data carrying the text information set can thus be generated during the download process, which improves the generation quality and the comprehensiveness of the target audio and video data and further improves the user experience.
Referring to fig. 13, a block diagram of a terminal is shown according to an example embodiment. As shown in fig. 13, the terminal 1300 may include: at least one processor 1301, at least one network interface 1304, a user interface 1303, a memory 1305, and at least one communication bus 1302.
The communication bus 1302 is used to enable connection and communication between these components.
The user interface 1303 may include a speaker and a display screen; optionally, the user interface 1303 may further include a standard wired interface and a wireless interface.
The network interface 1304 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
The processor 1301 may include one or more processing cores. The processor 1301 connects various parts throughout the terminal 1300 using various interfaces and lines, and performs the various functions of the terminal 1300 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1305 and invoking the data stored in the memory 1305. Optionally, the processor 1301 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 1301 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 1301 and may instead be implemented by a separate chip.
The memory 1305 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 1305 includes a non-transitory computer-readable medium. The memory 1305 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1305 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1305 may be at least one storage device located remotely from the processor 1301. As shown in fig. 13, the memory 1305, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for audio and video generation.
In the terminal 1300 shown in fig. 13, the user interface 1303 is mainly used to provide an input interface for a user and to obtain data input by the user, and the processor 1301 may be configured to invoke the audio and video generation application stored in the memory 1305 and specifically perform the steps of the method embodiments of fig. 3 to 9.
Correspondingly, an embodiment of the application further provides a computer-readable storage medium storing a computer program. When executed by one or more processors, the computer program causes the one or more processors to perform the steps in the method embodiments of fig. 3 to 9.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal and a carrier wave.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present application and are presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

CN202210662172.XA — filed 2022-06-13 — Audio and video generation method and device — Active — granted as CN115086742B (en)

Publications (2)

CN115086742A — published 2022-09-20
CN115086742B — published 2024-05-14


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
