Disclosure of Invention
Embodiments of the present application aim to provide a method for controlling visual media; through the technical solutions of these embodiments, visual media can be controlled more efficiently.
In a first aspect, an embodiment of the present application provides a method for controlling visual media, including: obtaining a markup language file of the staff of music to be matched; extracting music information from the markup language file, where the music information includes at least one of the following: pitch information, loudness information, frequency information, rhythm information, and timbre information; and controlling light or video based on the music information.
In this embodiment, the key music information can be extracted accurately and quickly from the markup language file of the staff, so that the visual media can be controlled more efficiently through the music information.
In some embodiments, controlling the light or video based on the music information includes:
inputting the music information into a preset algorithm matching model to obtain control information, where the control information includes light control information or video control information; the light control information includes at least one of the following: color control information, intensity control information, frequency control information, rhythm control information, brightness control information, and on-off control information of the light; the video control information includes at least one of the following: motion control information, rhythm control information, and animation frame control information of the video;
and controlling the light or video based on the control information.
In this embodiment of the application, the extracted music information can be matched to properties of the video or the light through the algorithm matching model, so that the video or the light can then be controlled by the music.
In some embodiments, before inputting the music information into the preset algorithm matching model to obtain the control information, the method further includes:
acquiring one or more music information samples for controlling the light or video, where each music information sample includes one or more pieces of music information together with the standard light control information and standard video control information corresponding to those pieces of music information;
and training an existing model with the one or more music information samples to obtain the algorithm matching model.
In the above embodiments of the present application, the model is trained on one or more music information samples and the corresponding standard light control information and standard video control information, so that the model can match music information to properties of the video or the light.
In some embodiments, obtaining the markup language file of the staff of the music to be matched includes:
editing the staff of the music to be matched in a staff editor, and exporting a markup language file;
or
playing the music to be matched on one or more instruments, and exporting a markup language file of the staff.
In the above embodiment, the staff corresponding to the target music can be edited directly in a staff editor and then exported as a markup language file, or the markup language file of the staff can be exported directly from an electronic instrument. Obtaining the markup language file in multiple ways in turn enables control of the video and the light.
In some embodiments, when applied to a vehicle cabin, the light is the ambient light of the cabin, and the video is the video played on a screen in the cabin;
when applied indoors, the light is indoor ambient lighting, and the video is the video played by an indoor smart TV or projector.
In the above embodiments, the method can be applied in multiple scenarios, and different visual media can be controlled by music in different scenarios.
In a second aspect, an embodiment of the present application provides a method for generating music, including: extracting information to be matched from a file to be matched, where the file to be matched includes a script file of a video to be matched or of light to be matched, and the information to be matched includes playing information of the video to be matched or conversion information of the light to be matched; and acquiring corresponding music based on the information to be matched.
In this embodiment of the application, the generation of music can be controlled by the information extracted from the script file of the video or the light.
In some embodiments, the conversion information includes at least one of the following: light color information, light intensity information, light frequency information, light rhythm information, light brightness information, and light on-off information; the playing information includes at least one of the following: video motion information, video rhythm information, and video animation frame information.
In the above embodiments, the conversion information and the playing information include one or more pieces of information that can be matched against music information, thereby generating music.
In some embodiments, obtaining corresponding music based on the information to be matched includes:
inputting information to be matched into a preset algorithm matching model to obtain music generation information, wherein the music generation information comprises at least one of the following information: tone generation information, loudness generation information, frequency generation information, rhythm generation information, and tone generation information;
and acquiring corresponding music based on the music generation information.
In this embodiment of the application, the playing information of the video and the conversion information of the light can be matched to music information through the algorithm matching model, and the corresponding music is then matched from, or generated directly based on, the obtained music information.
In some embodiments, when applied to a vehicle cabin, the light to be matched is the ambient light of the cabin, and the video to be matched is the video played on a screen in the cabin;
when applied indoors, the light to be matched is indoor ambient lighting, and the video to be matched is the video played by an indoor smart TV or projector.
In the above embodiments of the present application, the method can be applied in multiple scenarios, and corresponding music can be obtained from different visual media in different scenarios.
In a third aspect, an embodiment of the present application provides an apparatus for controlling visual media, including:
the acquisition module is used for acquiring a markup language file of a staff of music to be matched;
the extracting module is used for extracting the music information from the markup language file, where the music information includes at least one of the following: pitch information, loudness information, frequency information, rhythm information, and timbre information;
and the control module is used for controlling the light or the video based on the music information.
Optionally, the control module is specifically configured to:
inputting the music information into a preset algorithm matching model to obtain control information, where the control information includes light control information or video control information; the light control information includes at least one of the following: color control information, intensity control information, frequency control information, rhythm control information, brightness control information, and on-off control information of the light; the video control information includes at least one of the following: motion control information, rhythm control information, and animation frame control information of the video;
and controlling the light or video based on the control information.
Optionally, the apparatus further comprises:
the training module is used for acquiring, before the control module inputs the music information into the preset algorithm matching model to obtain the control information, one or more music information samples for controlling the light or video, where each music information sample includes one or more pieces of music information together with the standard light control information and standard video control information corresponding to those pieces of music information;
and training an existing model with the one or more music information samples to obtain the algorithm matching model.
Optionally, the obtaining module is specifically configured to:
editing the staff of the music to be matched in a staff editor, and exporting a markup language file;
or
playing the music to be matched on one or more instruments, and exporting a markup language file of the staff.
Optionally, when the apparatus is applied to a vehicle cabin, the light is the ambient light of the cabin, and the video is the video played on a screen in the cabin;
when the apparatus is applied indoors, the light is indoor ambient lighting, and the video is the video played by an indoor smart TV or projector.
In a fourth aspect, an embodiment of the present application provides an apparatus for generating music, including:
the extraction module is used for extracting information to be matched from a file to be matched, where the file to be matched includes a script file of a video to be matched or of light to be matched, and the information to be matched is used for controlling the generation of music;
and the acquisition module is used for acquiring corresponding music based on the information to be matched.
Optionally, the conversion information includes at least one of the following: light color information, light intensity information, light frequency information, light rhythm information, light brightness information, and light on-off information; the playing information includes at least one of the following: video motion information, video rhythm information, and video animation frame information.
Optionally, the obtaining module is specifically configured to:
inputting the information to be matched into a preset algorithm matching model to obtain music generation information, where the music generation information includes at least one of the following: pitch generation information, loudness generation information, frequency generation information, rhythm generation information, and timbre generation information;
and acquiring corresponding music based on the music generation information.
Optionally, when the apparatus is applied to a vehicle cabin, the light to be matched is the ambient light of the cabin, and the video to be matched is the video played on a screen in the cabin;
when the apparatus is applied indoors, the light to be matched is indoor ambient lighting, and the video to be matched is the video played by an indoor smart TV or projector.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions that, when executed by the processor, perform the method of the first or second aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first or second aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Some terms referred to in the embodiments of the present application will be described first to facilitate understanding by those skilled in the art.
APP: application mobile phone software mainly refers to software installed on a smart phone, and overcomes the defects and individuation of an original system.
Notation Pad: a staff music editor.
XML: extensible Markup Language (XML), a subset of standard universal Markup Language, can be used to tag data, define data types, and is a source Language that allows a user to define his or her own Markup Language.
MusicXML: extensible markup language bearing music.
PCM: pulse Code Modulation (PCM), binary optical Pulse "0" Code and "1" Code, which are generated by on-off Modulation of a light source by a binary digital signal. And the digital signal is generated by sampling, quantizing and encoding a continuously varying analog signal.
The present application is applied to scenarios of controlling visual media; the specific scenario is controlling visual media through certain information of music.
However, in the current approach of controlling light with audio, the existing pulse-code-modulated audio is first resampled, the audio intensity and frequency information are then extracted by Fourier transform, rhythm points are obtained by short-time energy comparison, and the obtained rhythm points are converted, through an acousto-optic matching model, into control of light intensity, color, and the like. This approach is computationally expensive, realizes only acousto-optic matching, and allows only some simple music to control the light.
Therefore, the present application obtains a markup language file of the staff of the music to be matched; extracts music information from the markup language file, where the music information includes at least one of the following: pitch information, loudness information, frequency information, rhythm information, and timbre information; and controls the light or video based on the music information. Through the markup language file of the staff, the key music information in the file can be extracted accurately and quickly, so that the visual media can be controlled more efficiently through the music information.
In the embodiments of the present application, the execution subject may be a visual media device in a visual media system. In practical applications, the visual media device may be an electronic device such as a cabin processor, a terminal device, or a server, which is not limited herein.
The method for controlling visual media according to the embodiment of the present application is described in detail below with reference to fig. 1.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for controlling visual media according to an embodiment of the present application, where the method for controlling visual media shown in fig. 1 includes:
step 110: and acquiring a markup language file of the staff of the music to be matched.
The visual media may be media devices such as an ambient lamp, a television, a projector, or a mobile phone; the music to be matched may be music provided by a user or music stored in the system; and the markup language file may be a file in XML format or in another markup language.
In some embodiments of the present application, obtaining the markup language file of the staff of the music to be matched includes: editing the staff of the music to be matched in a staff editor, and exporting a markup language file; or playing the music to be matched on one or more instruments, and exporting a markup language file of the staff.
In this process, the staff corresponding to the target music can be edited directly in a staff editor and then exported as a markup language file, or the markup language file of the staff can be exported directly from an electronic instrument. Obtaining the markup language file in multiple ways in turn enables control of the video and the light.
When the staff of the music to be matched is edited through the staff editor, a user may generate the staff by editing it in an APP on a terminal device, or an instrument may be connected to or integrated with a staff-editing application. The instrument may be a piano, guitar, violin, saxophone, flute, or the like.
In some embodiments of the present application, when the embodiment is applied to a vehicle cabin, the light is the ambient light of the cabin, and the video is the video played on a screen in the cabin; when the embodiment is applied indoors, the light is indoor ambient lighting, and the video is the video played by an indoor smart TV or projector.
In this process, the method can be applied in multiple scenarios, and different visual media can be controlled by music in different scenarios.
The cabin may be the cabin of a car.
Step 120: music information in a markup language file is extracted.
Wherein the music information includes at least one of the following information: pitch information, loudness information, frequency information, tempo information, and tone information.
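As an illustration only, the following is a minimal Python sketch of this extraction step, assuming the markup language file is MusicXML; the element names come from the public MusicXML schema, and the extracted fields are merely an example subset of the music information.

```python
# A sketch of step 120: pull tempo (rhythm information) and pitches
# out of a MusicXML file with the standard library's XML parser.
import xml.etree.ElementTree as ET

def extract_music_info(path: str) -> dict:
    root = ET.parse(path).getroot()
    info = {"tempo_bpm": None, "notes": []}
    # Rhythm information: MusicXML encodes tempo as <sound tempo="...">.
    sound = root.find(".//sound[@tempo]")
    if sound is not None:
        info["tempo_bpm"] = float(sound.get("tempo"))
    # Pitch information: each <note> carries <pitch><step>/<octave>.
    for note in root.iter("note"):
        pitch = note.find("pitch")
        if pitch is not None:
            info["notes"].append(pitch.findtext("step") + pitch.findtext("octave"))
    return info

# print(extract_music_info("score.musicxml"))  # hypothetical file name
```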
Step 130: the light or video is controlled based on the music information.
In some embodiments of the present application, controlling the light or video based on the music information includes: inputting the music information into a preset algorithm matching model to obtain control information; and controlling the light or video based on the control information.
In this process, the extracted music information can be matched to properties of the video or the light through the algorithm matching model, so that the video or the light is then controlled by the music.
The control information includes light control information or video control information. The light control information includes at least one of the following: color control information, intensity control information, frequency control information, rhythm control information, brightness control information, and on-off control information of the light; the video control information includes at least one of the following: motion control information, rhythm control information, and animation frame control information of the video.
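For illustration, a minimal rule-based sketch of this matching step follows. The application describes a trained algorithm matching model, so the thresholds and mappings below are invented assumptions standing in for that model, not its actual behavior.

```python
# A rule-based stand-in for the matching model: map music information
# to light control information. All rules here are illustrative.
def match_light_control(music_info: dict) -> dict:
    tempo = music_info.get("tempo_bpm") or 120.0
    return {
        "blink_interval_s": 60.0 / tempo,           # rhythm control
        "color": "warm" if tempo < 90 else "cool",  # color control (assumption)
        "brightness": min(1.0, music_info.get("loudness", 0.5)),  # brightness control
        "power": "on",                              # on-off control
    }

print(match_light_control({"tempo_bpm": 120.0, "loudness": 0.8}))
```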
In some embodiments of the present application, before the music information is input into the preset algorithm matching model to obtain the control information, the method shown in fig. 1 further includes: acquiring one or more music information samples for controlling the light or video; and training an existing model with the one or more music information samples to obtain the algorithm matching model.
In this process, the model is trained on one or more music information samples and the corresponding standard light control information and standard video control information, so that the model can match music information to properties of the video or the light.
Each music information sample includes one or more pieces of music information together with the standard light control information and standard video control information corresponding to those pieces of music information.
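As a sketch of what such training might look like, the following uses a nearest-neighbor regressor from scikit-learn as a stand-in; the application does not specify the model family, and the numeric feature and label layouts are assumptions made for illustration.

```python
# A minimal training sketch: each music information sample is reduced to
# a feature vector, each standard light control information to a label
# vector, and an existing model is fit on the pairs.
from sklearn.neighbors import KNeighborsRegressor

# Features per sample: [tempo_bpm, loudness, mean_pitch]  (assumed layout)
X = [[120.0, 0.8, 60.0],
     [ 70.0, 0.3, 52.0]]
# Labels per sample: [blink_interval_s, brightness]  (standard control info)
y = [[0.50, 0.8],
     [0.86, 0.3]]

model = KNeighborsRegressor(n_neighbors=1)  # match to the nearest sample
model.fit(X, y)
print(model.predict([[110.0, 0.7, 58.0]]))  # control info of the closest sample
```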
In addition, the algorithm matching model can match the corresponding information directly through a function call, which is more efficient. Meanwhile, since the acquired audio information is completely accurate, the visualization of the audio can be richer. For example, the XML information of a symphony can be used to realize light effects with more varied layers and rhythms, and the body movements of characters in animations and of virtual images (such as holograms) can be driven according to the markup language information of the audio, so as to realize visual accompanying dance and the like. Of course, a staff can also be authored by hand, and its performance visualization effect verified through the markup language and the matching model.
Through the process shown in fig. 1, the key music information in the file can be extracted accurately and quickly from the markup language file of the staff, so that the visual media can be controlled more efficiently through the music information.
The method for generating music according to the embodiment of the present application is described in detail below with reference to fig. 2.
Referring to fig. 2, fig. 2 is a flowchart of a method for generating music according to an embodiment of the present application, where the method for generating music shown in fig. 2 includes:
step 210: and extracting the information to be matched in the file to be matched.
The file to be matched includes a script file of the video to be matched or of the light to be matched, and the information to be matched includes playing information of the video to be matched or conversion information of the light to be matched. The script file includes information about the light such as light color information, light intensity information, light frequency information, light rhythm information, light brightness information, and light on-off information.
In some embodiments of the present application, the conversion information includes at least one of the following: light color information, light intensity information, light frequency information, light rhythm information, light brightness information, and light on-off information; the playing information includes at least one of the following: video motion information, video rhythm information, and video animation frame information.
In the above process, the conversion information and the playing information include one or more pieces of information used for matching with music information, thereby generating music.
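As an illustration only, a minimal Python sketch of this extraction step follows, assuming the light script file is JSON; the application does not specify the script format, so the file layout and field names here are purely hypothetical.

```python
# A sketch of step 210 for a light script: read the script file and keep
# only the fields named as conversion information. The JSON format and
# key names are assumptions for illustration.
import json

def extract_conversion_info(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        script = json.load(f)
    keys = ("color", "intensity", "frequency", "rhythm", "brightness", "switch")
    return {k: script[k] for k in keys if k in script}

# print(extract_conversion_info("light_script.json"))  # hypothetical file
```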
Step 220: and acquiring corresponding music based on the information to be matched.
In the above embodiments of the present application, the generation of music can also be controlled by the information extracted from the script file of the video or the light.
The information to be matched can be used to generate the pitch, loudness, frequency, rhythm, and timbre generation information of the music; the music is then generated from this information, or the most similar music is matched from a music library.
In some embodiments of the application, when the embodiment is applied to a vehicle cabin, the light to be matched is the ambient light of the cabin, and the video to be matched is the video played on a screen in the cabin; when the embodiment is applied indoors, the light to be matched is indoor ambient lighting, and the video to be matched is the video played by an indoor smart TV or projector.
In this process, the method can be applied in multiple scenarios, and corresponding music can be obtained from different visual media in different scenarios.
In some embodiments of the present application, acquiring corresponding music based on the information to be matched includes: inputting the information to be matched into a preset algorithm matching model to obtain music generation information, where the music generation information includes at least one of the following: pitch generation information, loudness generation information, frequency generation information, rhythm generation information, and timbre generation information; and acquiring corresponding music based on the music generation information.
In this process, the playing information of the video and the conversion information of the light can be matched to music information through the algorithm matching model, and the corresponding music is then matched from, or generated directly based on, the obtained music information.
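For illustration, a minimal rule-based sketch of this reverse matching step follows; as with the forward direction, the mapping rules are invented assumptions standing in for the trained matching model.

```python
# A rule-based stand-in for matching conversion information to music
# generation information. All rules here are illustrative.
def match_music_generation(conversion_info: dict) -> dict:
    blink_hz = conversion_info.get("frequency", 2.0)  # light blink rate
    return {
        "rhythm_bpm": blink_hz * 60.0,                # rhythm generation info
        "loudness": conversion_info.get("brightness", 0.5),
        # Timbre generation info: map warm light to strings (assumption).
        "timbre": "strings" if conversion_info.get("color") == "warm" else "piano",
    }

print(match_music_generation({"frequency": 2.0, "brightness": 0.7, "color": "warm"}))
```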
The method for controlling visual media and the method for generating music are described above with reference to fig. 1-2; the system for controlling visual media according to the embodiments of the present application is described in detail below with reference to fig. 3.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a system for controlling visual media according to an embodiment of the present application, where the system for controlling visual media shown in fig. 3 includes:
the device comprises a staff editing module, an audio storage module, a processing module, an acousto-optic matching module, an audio and animation effect matching module, a scene information input module, a loudspeaker module and a visual media module.
The staff editing module is used for editing staff through user editing, such as tone selection, note dragging, beat speed setting and the like, and finally generating a music XML file or other XML files.
And the audio storage module is used for storing music for controlling visual media and music XML files or other XML files matched with the music, and feeding back information and inputting audio according to the requirements of the processing module for the loudspeaker module to sound.
And the processing module is used for comprehensively processing the MusicXML file or other XML files of the audio storage module and the scene information input by the scene information input module, controlling the acousto-optic matching module and the audio and animation effect matching module to realize different visual effects according to different scene requirements, and receiving the matching information fed back by the acousto-optic matching module and the audio and animation effect matching module.
And the acousto-optic matching module is used for processing the extracted information sent by the processing module to obtain corresponding matching information and feeding the matching information back to the processing unit and the corresponding visual media.
And the audio and animation effect matching module is used for processing the extracted information sent by the processing module to obtain corresponding matching information and feeding the matching information back to the processing unit and the corresponding visual media.
And the scene information input module is used for inputting the scene requirement information to the processing module.
And the loudspeaker module receives the music audio signal sent by the audio storage module and sends out sound through the power amplifier drive.
And the visual media module is used for controlling the visual effect of the visual media according to the matching information sent by the acousto-optic matching module and the audio and animation effect matching module.
The scene information may be, for example, ambient-light music rhythm or an exterior vehicle light show. All information of the music, including tempo, tune, and so on, is extracted by the processing module; for example, if the tempo information is 120 beats per minute, the interval between every two adjacent quarter notes is 60 s / 120 = 0.5 s. In addition, if the processing module associates the information in the MusicXML file with the audio-animation effect matching model, new application scenarios can be expanded: for example, a cartoon-image stream video can be generated automatically from the music after a picture set is selected, and animation actions (such as limb or dance movements of virtual characters, or door and window opening and closing actions in a vehicle model) can be generated automatically from the music information after an animation model is selected.
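The worked tempo example above, expressed as code for clarity:

```python
# At a tempo of 120 beats per minute, two adjacent quarter notes are
# separated by 60 / 120 = 0.5 seconds.
def quarter_note_interval_s(tempo_bpm: float) -> float:
    return 60.0 / tempo_bpm

assert quarter_note_interval_s(120.0) == 0.5
```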
The method for controlling visual media and the method for generating music are described above with reference to fig. 1-2; the apparatus for controlling visual media and the apparatus for generating music are described below with reference to fig. 4-5.
Referring to fig. 4, a schematic block diagram of an apparatus 400 for controlling visual media provided in the embodiments of the present application is shown; the apparatus 400 may be a module, a program segment, or code on an electronic device. The apparatus 400 corresponds to the embodiment of the method in fig. 1 and can perform the steps of that embodiment; the specific functions of the apparatus 400 are described below, with detailed description omitted where appropriate to avoid redundancy.
Optionally, the apparatus 400 includes:
an obtaining module 410, configured to obtain a markup language file of the staff of music to be matched;
an extracting module 420, configured to extract music information from the markup language file, where the music information includes at least one of the following: pitch information, loudness information, frequency information, rhythm information, and timbre information;
and a control module 430, configured to control the light or video based on the music information.
Optionally, the control module is specifically configured to:
inputting the music information into a preset algorithm matching model to obtain control information, where the control information includes light control information or video control information; the light control information includes at least one of the following: color control information, intensity control information, frequency control information, rhythm control information, brightness control information, and on-off control information of the light; the video control information includes at least one of the following: motion control information, rhythm control information, and animation frame control information of the video; and controlling the light or video based on the control information.
Optionally, the apparatus further comprises:
the training module is used for acquiring, before the control module inputs the music information into the preset algorithm matching model to obtain the control information, one or more music information samples for controlling the light or video, where each music information sample includes one or more pieces of music information together with the standard light control information and standard video control information corresponding to those pieces of music information; and training an existing model with the one or more music information samples to obtain the algorithm matching model.
Optionally, the obtaining module is specifically configured to:
editing the staff of the music to be matched in a staff editor, and exporting a markup language file; or playing the music to be matched on one or more instruments, and exporting a markup language file of the staff.
Optionally, when the apparatus is applied to a vehicle cabin, the light is the ambient light of the cabin, and the video is the video played on a screen in the cabin; when the apparatus is applied indoors, the light is indoor ambient lighting, and the video is the video played by an indoor smart TV or projector.
Referring to fig. 5, a schematic block diagram of an apparatus 500 for generating music provided in the embodiment of the present application is shown; the apparatus 500 may be a module, a program segment, or code on an electronic device. The apparatus 500 corresponds to the embodiment of the method in fig. 2 and can perform the steps of that embodiment; the specific functions of the apparatus 500 are described below, with detailed description omitted where appropriate to avoid redundancy.
Optionally, the apparatus 500 includes:
an extracting module 510, configured to extract information to be matched from a file to be matched, where the file to be matched includes a script file of a video to be matched or of light to be matched, and the information to be matched is used to control the generation of music;
and an obtaining module 520, configured to obtain corresponding music based on the information to be matched.
Optionally, the conversion information includes at least one of the following: light color information, light intensity information, light frequency information, light rhythm information, light brightness information, and light on-off information; the playing information includes at least one of the following: video motion information, video rhythm information, and video animation frame information.
Optionally, the obtaining module is specifically configured to:
inputting the information to be matched into a preset algorithm matching model to obtain music generation information, where the music generation information includes at least one of the following: pitch generation information, loudness generation information, frequency generation information, rhythm generation information, and timbre generation information; and acquiring corresponding music based on the music generation information.
Optionally, when the apparatus is applied to a vehicle cabin, the light to be matched is the ambient light of the cabin, and the video to be matched is the video played on a screen in the cabin; when the apparatus is applied indoors, the light to be matched is indoor ambient lighting, and the video to be matched is the video played by an indoor smart TV or projector.
Referring to fig. 6, a block diagram of an apparatus 600 for controlling visual media provided in an embodiment of the present application may include a memory 610 and a processor 620. Optionally, the apparatus may further include: a communication interface 630 and a communication bus 640. The apparatus corresponds to the embodiment of the method in fig. 1 and can perform the steps of that embodiment; the specific functions of the apparatus are described below.
In particular, the memory 610 is used to store computer-readable instructions.
The processor 620, which processes the readable instructions stored in the memory, is capable of performing the steps of the method in fig. 1.
The communication interface 630 is used for communicating signaling or data with other node devices, for example a server, a terminal, or other device nodes; the embodiments of the present application are not limited in this respect.
The communication bus 640 is used for realizing direct-connection communication among the above components.
In the embodiments of the present application, the communication interface 630 of the device is used for signaling or data communication with other node devices. The memory 610 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 610 may optionally be at least one storage device located remotely from the aforementioned processor. The memory 610 stores computer-readable instructions which, when executed by the processor 620, cause the electronic device to perform the method process of fig. 1 described above. The processor 620 may be used in the apparatus 400 and perform the functions herein. The processor 620 may be, for example, a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component; the embodiments of the present application are not limited thereto.
Referring to fig. 7, a schematic block diagram of an apparatus 700 for generating music provided in an embodiment of the present application may include a memory 710 and a processor 720. Optionally, the apparatus may further include: a communication interface 730 and a communication bus 740. The apparatus corresponds to the embodiment of the method in fig. 2 and can perform the steps of that embodiment; the specific functions of the apparatus are described below.
In particular, the memory 710 is used to store computer-readable instructions.
The processor 720, which processes the readable instructions stored in the memory, is capable of performing the steps of the method in fig. 2.
The communication interface 730 is used for communicating signaling or data with other node devices, for example a server, a terminal, or other device nodes; the embodiments of the present application are not limited in this respect.
The communication bus 740 is used for realizing direct-connection communication among the above components.
In the embodiments of the present application, the communication interface 730 of the device is used for signaling or data communication with other node devices. The memory 710 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 710 may optionally be at least one storage device located remotely from the processor. The memory 710 stores computer-readable instructions which, when executed by the processor 720, cause the electronic device to perform the method process of fig. 2 described above. The processor 720 may be used in the apparatus 500 and perform the functions herein. The processor 720 may be, for example, a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component; the embodiments of the present application are not limited thereto.
Embodiments of the present application further provide a readable storage medium; when the computer program stored thereon is executed by a processor, it performs the method process performed by the electronic device in the method embodiments shown in fig. 1 or fig. 2.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method and is not described in detail here.
To sum up, the embodiments of the present application provide a method for controlling visual media, a method for generating music, and apparatuses therefor. The method includes acquiring a markup language file of the staff of music to be matched; extracting music information from the markup language file, where the music information includes at least one of the following: pitch information, loudness information, frequency information, rhythm information, and timbre information; and controlling the light or video based on the music information. By this method, the visual media can be controlled more efficiently.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.