Background
As the functions of electronic devices continue to develop and become more widespread, listening to audio and viewing videos and images on electronic devices have become very common applications. As the various technologies of electronic devices mature, users place ever higher requirements on the picture quality of videos and images and on the sound quality of audio. However, since output objects such as videos and audio come in many types and styles, it is difficult for current electronic devices to provide a satisfactory experience for different output objects using the same output parameters.
Disclosure of Invention
The invention mainly aims to provide an output effect adjusting method, apparatus, and device, and a computer-readable storage medium, and aims to solve the technical problem that it is difficult for current electronic devices to satisfy the viewing experience of different output objects using the same output parameters.
In order to achieve the above object, the present invention provides an output effect adjusting method, including the steps of:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
Optionally, the method is applied to a client, the client establishes a communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request for the output object to the server, so that the server obtains description data of the output object from a database according to the data request and returns the description data;
and analyzing the received description data to obtain an output effect label.
Optionally, the step of obtaining the target output parameter matched with the output effect tag includes:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Optionally, before the step of obtaining the output effect tag corresponding to the output object, the method further includes:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Optionally, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Optionally, the step of performing image analysis on the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the picture frame to obtain a brightness value of the picture frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Optionally, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
To achieve the above object, the present invention also provides an output effect adjusting apparatus, comprising:
the first acquisition module is used for acquiring an output effect label corresponding to an output object;
the second acquisition module is used for acquiring target output parameters matched with the output effect labels;
an output module for outputting the output object according to the target output parameter.
To achieve the above object, the present invention also provides an output effect adjustment apparatus including: a memory, a processor and an output effect adjustment program stored on the memory and executable on the processor, the output effect adjustment program when executed by the processor implementing the steps of the output effect adjustment method as described above.
Further, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an output effect adjustment program which, when executed by a processor, implements the steps of the output effect adjustment method as described above.
According to the method and the device, an output effect tag corresponding to an output object is obtained, a target output parameter matching the tag is obtained, and the output object is output according to the target output parameter. Different output parameters are thus adopted for output objects of different types and styles, so that each type of output object can achieve its best output effect, improving the user's viewing experience. In addition, existing approaches that adjust the output effect by analyzing the object in real time with an artificial intelligence algorithm place high computing power demands on the hardware; an ordinary smartphone client cannot meet these demands, so such approaches cannot be applied there. Compared with that scheme, the invention labels the output object in advance, so that when the output object needs to be output, only the corresponding output effect tag needs to be obtained, the output parameter is determined from the tag, and the object is output accordingly. A better output effect can thus be achieved with no extra demand on hardware computing power and no extra hardware cost, giving the invention a wider range of application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that the output effect adjusting device in the embodiment of the present invention may be a smart phone, a personal computer, a server, and the like, and the device may be deployed in a robot, which is not limited herein.
As shown in fig. 1, the output effect adjusting apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the apparatus shown in fig. 1 does not constitute a limitation of the output effect adjustment apparatus and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an output effect adjustment program. The operating system is a program that manages and controls the hardware and software resources of the device, supporting the execution of the output effect adjustment program as well as other software or programs. In the device shown in fig. 1, the user interface 1003 is mainly used for data communication with a client; the network interface 1004 is mainly used for establishing a communication connection with a server; and the processor 1001 may be configured to call the output effect adjustment program stored in the memory 1005 and perform the following operations:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
Further, the method is applied to a client, the client establishes communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request for the output object to the server, so that the server obtains description data of the output object from a database according to the data request and returns the description data;
and analyzing the received description data to obtain an output effect label.
Further, the step of obtaining the target output parameter matching with the output effect tag comprises:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Further, before the step of obtaining the output effect tag corresponding to the output object, the processor 1001 may be further configured to call the output effect adjustment program stored in the memory 1005, and perform the following operations:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Further, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Further, the step of analyzing the image of the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the picture frame to obtain the brightness value of the picture frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Further, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
Based on the above structure, various embodiments of the output effect adjustment method are proposed.
Referring to fig. 2, fig. 2 is a flowchart illustrating an output effect adjusting method according to a first embodiment of the present invention.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein. The execution subject of each embodiment of the output effect adjustment method of the present invention may be a device such as a smart phone, a personal computer, and a server, and for convenience of description, the following embodiments use a client as the execution subject to be explained. In this embodiment, the output effect adjusting method includes:
step S10, acquiring an output effect label corresponding to the output object;
in the present embodiment, the output object may be video, audio, or image; outputting the video and the image means outputting the video and the image to a display device connected with the client, and displaying the content of the video and the image in the display device; outputting the audio refers to outputting the audio to a sound output device such as a sound box, a loudspeaker or an earphone connected to the client, and playing the audio content in the sound output device.
The output object may be stored locally by the client, or may be an output object such as a network video or an image acquired from the server.
When the client needs to output the output object, it can first obtain the output effect tag corresponding to the output object. Various output effect tags are predefined, and output objects bearing different kinds of tags should present different output effects. It should be noted that, for videos and images, the output effect label may be referred to as a display effect label, and for audio, as a sound effect label.
For example, the display effects for a drama and a movie should differ, so for an output object such as a video, two display effect labels, "drama" and "movie", can be defined; likewise, some video frames are brighter and some are darker, so the corresponding display effects should differ, and a "bright" display effect label can also be defined for videos. Similarly, the sound effects for lyrical music and rock music may differ, so for output objects such as audio, sound effect labels related to the music type, such as "lyrical" and "rock", may be defined.
The output effect label can be attached to the output object in advance, either by manual labeling or by automatic labeling. Automatic labeling may analyze the file name corresponding to the output object, or analyze the content data of the output object and label it according to the analysis result. For example, when the output object is a video, the file name corresponding to the video generally indicates the type of the video, such as whether it is a drama or a movie; by text matching on the file name, it can be determined whether to attach a "drama" display effect label or a "movie" display effect label. For another example, when the output object is audio, the audio waveform of the audio data may be analyzed to determine the music type, and a sound effect label corresponding to that music type is then attached; for instance, the amplitude and frequency of the audio waveform may be analyzed, and if both reach certain thresholds, the audio may be determined to belong to the rock genre and a "rock" sound effect label attached.
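The filename-matching form of automatic labeling described above can be sketched as follows. This is a minimal Python illustration; the keyword lists and tag names are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical filename-based auto-tagging sketch. The keyword lists
# and the returned tag names are illustrative assumptions.
def tag_from_filename(filename: str):
    """Return a display effect tag inferred from a video's filename."""
    name = filename.lower()
    if any(kw in name for kw in ("movie", "film", "bluray")):
        return "movie"
    if any(kw in name for kw in ("s01e", "episode", "drama")):
        return "drama"
    return None  # no tag inferred; fall back to manual labeling
```

A file such as `Best.Movie.2020.BluRay.mp4` would match the "movie" keywords, while `Show.S01E02.mkv` would be tagged "drama".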
It should be noted that a plurality of output effect labels can be attached to one output object; for example, a video may carry both the "movie" and "bright" labels.
Step S20, acquiring target output parameters matched with the output effect label;
Output parameters corresponding to various output effect labels are preset in the client. The output parameters can be set according to experience, so that when an output object bearing a given label is output with those parameters, the best output effect is obtained. Moreover, because the output devices of different clients may differ in software and hardware configuration, the same output object may require different display parameters on different devices to present the best effect; the output parameters set in different clients may therefore differ. The output parameter corresponding to an output effect label may be a set of parameter values or a predetermined output mode; for example, for a video, the output parameter may be specific values of the display's display-parameter items, or one of the display's several display modes.
It should be noted that, when one output object corresponds to multiple output effect tags, output parameters corresponding to different tag combinations can be preset in the client.
The client can determine the output parameters matched with the output effect labels of the output objects according to the preset corresponding relation, and the output parameters are called target output parameters.
Step S30, outputting the output object according to the target output parameter.
And the client outputs the output object according to the target output parameter. Specifically, the client adjusts the output parameter of the output device to the target output parameter, and then outputs the output object to the output device, so that the output device outputs the output object. When the output object is a video or an image, the client adjusts the display parameters of the display equipment to the display parameter values or the display modes corresponding to the target output parameters, and outputs the video data or the image data to the display equipment so as to enable the display equipment to display the video or the image. When the output object is audio, the client adjusts the sound parameters of the sound output equipment to the sound parameter values or the sound modes corresponding to the target output parameters, and outputs the audio data to the sound output equipment so that the sound output equipment can output the audio.
Further, the step S30 includes:
step S301, calling a parameter adjusting interface to adjust the output parameter of the output device to the target output parameter, so that the output device outputs the output object according to the target output parameter.
In an embodiment, an interface for adjusting the various output parameters, that is, a parameter adjustment interface, is provided in the client. After the target output parameter is determined, the client may invoke the parameter adjustment interface to adjust the output parameter of the output device to the target output parameter, so that the output device outputs the output object with the target output parameter. It should be noted that, if different output parameters have different interfaces, the client calls the interface corresponding to the target output parameter to implement the parameter adjustment.
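A minimal sketch of dispatching target output parameters to per-parameter adjustment interfaces follows. The device class and interface names are illustrative stand-ins, since real adjustment interfaces are vendor-specific.

```python
# Illustrative stand-in for an output device; real devices expose
# vendor-specific adjustment APIs instead of these methods.
class OutputDevice:
    def __init__(self):
        self.params = {}

    def set_brightness(self, value):
        self.params["brightness"] = value

    def set_contrast(self, value):
        self.params["contrast"] = value


def apply_target_parameters(device, target_params):
    """Call the adjustment interface matching each target parameter."""
    interfaces = {
        "brightness": device.set_brightness,
        "contrast": device.set_contrast,
    }
    for name, value in target_params.items():
        interfaces[name](value)  # per-parameter interface dispatch
```

With this sketch, applying `{"brightness": 80, "contrast": 55}` to a device leaves those values set on it.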
In this embodiment, an output effect tag corresponding to the output object is obtained, a target output parameter matching the tag is obtained, and the output object is output according to the target output parameter, so that different output parameters are adopted for output objects of different types and styles. Each type of output object can thus achieve its best output effect, improving the user's viewing experience. In addition, existing approaches that adjust the output effect by analyzing the object in real time with an artificial intelligence algorithm place high computing power demands on the hardware; an ordinary smartphone client cannot meet these demands, so such approaches cannot be applied there. Compared with that scheme, this embodiment labels the output object in advance, so that when the output object needs to be output, only the corresponding output effect tag needs to be obtained, the output parameter is determined from the tag, and the object is output accordingly. A better output effect can thus be achieved with no extra demand on hardware computing power and no extra hardware cost, giving the scheme a wider range of application.
Further, a second embodiment of the output effect adjustment method of the present invention is proposed based on the above-described first embodiment, and in this embodiment, the step S10 includes:
step S101, sending a data request for the output object to the server, so that the server obtains description data of the output object from a database according to the data request and returns the description data;
in this embodiment, each resource (video, image, or audio) in the server may be marked with an output effect tag in advance, and the output effect tag is added to the description data of the resource. Specifically, each resource in the database of the current server has corresponding description data, such as description data describing the type of the resource, the data size of the resource, and the like; in this embodiment, an output effect tag may be added to the existing description data, for example, the tag may be added by using add function of the database. The method can be manually operated in the database to label each resource, and can also be automatically operated in the database to label the resources by analyzing the resources by the server.
When the client requests a resource from the server, the server returns the resource together with the description data carrying the output effect label; alternatively, after the client has acquired the resource, when the resource is to be used as an output object, the client sends a separate data request for the description data of that output object to the server, and the server returns the description data to the client.
And step S102, analyzing the received description data to obtain an output effect label.
After receiving the description data corresponding to the output object, the client parses the description data to obtain the output effect label. The description data may adopt a common data format, for example JSON (a lightweight data-interchange format), and may be parsed in an existing manner, which is not described in detail herein.
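Assuming, purely for illustration, that the JSON description data carries the labels in a field named `effect_tags` (the field name is not specified by the disclosure), the client-side parsing step might look like:

```python
import json

# Sketch of extracting output effect tags from JSON description data.
# The field name "effect_tags" is an illustrative assumption.
def parse_effect_tags(description_data: str):
    """Parse the description data and return its output effect tags."""
    data = json.loads(description_data)
    return data.get("effect_tags", [])  # empty list when untagged
```

For description data such as `{"type": "video", "size": 1024, "effect_tags": ["movie", "bright"]}`, the function returns the two tags.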
Fig. 3 is a schematic view illustrating a video display effect adjustment process. The client can establish a connection with the server through the standard HTTP (Hypertext Transfer Protocol) network protocol; the client requests background data, the server delivers the video and its description data to the client, the client parses the description data to obtain the content of the display effect label field, determines the target display parameters according to that content, and calls the corresponding display parameter adjustment interface to adjust the display parameters so that the video is displayed on the display device with the target display parameters.
In this embodiment, the output effect tags are uniformly marked on the resources in the server and added to the existing resource description data, so the client only needs to perform conventional parsing of the description data of the output object to obtain the output effect tag, and can then match the appropriate output parameters according to the tag, thereby outputting the output object with the optimal output effect. No extra demand is placed on the hardware computing power of the client, so the hardware cost is reduced and the range of application is wider.
Further, the step S20 includes:
step S201, searching a preset mapping relation table for a target output parameter matched with the output effect tag, where the preset mapping relation table includes output parameters corresponding to various output effect tags.
In an embodiment, the mapping relationship table may be used to record output parameters corresponding to each type of output effect tag. After the client acquires the output effect label corresponding to the output object, the client searches a target output parameter matched with the output effect label in the mapping relation table.
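A minimal sketch of the preset mapping relation table and the lookup step follows; the tag names and parameter values are illustrative assumptions, and a real client would tune them per device.

```python
# Illustrative preset mapping relation table: output effect tag ->
# output parameters. The tags and values are assumptions.
PRESET_MAPPING = {
    "movie":  {"contrast": 60, "color_mode": "cinema"},
    "drama":  {"contrast": 50, "color_mode": "standard"},
    "bright": {"backlight": 80},
}


def lookup_target_parameters(effect_tag):
    """Search the table for the parameters matching the tag."""
    return PRESET_MAPPING.get(effect_tag)  # None if the tag is unknown
```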
Further, the method further comprises:
step S40, receiving a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
in an embodiment, if a new type of output effect tag is added to the server, for example, for an output object such as a video, a display effect tag of "soft" is newly added, an output parameter corresponding to the output effect tag may also be added to the server, and the server sends the output effect tag and the output parameter to the client as a new mapping entry. It should be noted that, since different clients may set different output parameters for different clients due to different hardware configurations of the output device, the new mapping entries sent by the server to different clients are different.
Further, the server may send a new mapping entry corresponding to the new output effect tag to the client when the new output effect tag is included in the output effect tags corresponding to the resources requested by the client.
And step S50, adding the new mapping entry to the preset mapping relation table.
And the client adds the received new mapping item into the mapping relation table so that when the new output effect tag exists in the output effect tags corresponding to the subsequent output objects, the output parameters corresponding to the new output effect tag can be searched from the mapping relation table, and therefore the adjustment of the output effect of the new output object is realized.
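A sketch of merging a new mapping entry into the client's table follows; the entry field names `tag` and `params` are assumed for illustration.

```python
# Sketch of handling a new mapping entry pushed by the server.
# The entry field names "tag" and "params" are assumptions.
def handle_new_mapping_entry(mapping_table: dict, entry: dict) -> None:
    """Add the new output effect tag and its parameters to the table."""
    mapping_table[entry["tag"]] = entry["params"]
```

After the merge, a subsequent lookup for the new tag succeeds without any client update.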
In an embodiment, when an output effect tag is newly added at the server, the corresponding output parameters could instead be added to the client manually by a developer. Compared with that approach, having the server send the new mapping entry to the client saves manually developed client programs and spares the client a version update.
Further, based on the first and/or second embodiments, a third embodiment of the output effect adjustment method of the present invention is provided, in this embodiment, the step S10 includes:
step S103, extracting picture frames from the video;
and step S104, carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
In this embodiment, when the output object is a video, the client may extract picture frames from the video. Frame extraction may randomly extract one frame or several frames; alternatively, the number of frames to be extracted may be preset, and the extraction interval determined from the total length of the video, that is, frames are extracted from the video at equal intervals.
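The equal-interval variant of frame extraction can be sketched as follows; this computes only the frame indices to sample, and a real client would additionally decode those frames from the video stream.

```python
# Sketch of choosing equally spaced frame indices from a video with
# `total_frames` frames; decoding the frames themselves is omitted.
def frame_indices(total_frames: int, num_samples: int):
    """Return `num_samples` equally spaced frame indices."""
    if num_samples >= total_frames:
        return list(range(total_frames))  # sample every frame
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]
```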
The client performs image analysis on the extracted picture frames to obtain an analysis result, and generates an output effect label of the video according to the analysis result. Depending on the predefined display effect labels, different image analyses can be performed on the picture frames to determine whether to attach the corresponding display effect label to the video. For example, when a "soft" display effect label is predefined, the client may perform image sharpness analysis on the picture frames to determine their sharpness values; when there are multiple picture frames, an average of the per-frame sharpness values may be computed and used as the sharpness value of the video. The client compares the sharpness value of the video with a preset value; if the sharpness value is smaller than the preset value, the video picture is determined to be soft and a "soft" display effect label is generated for the video, and otherwise no label is attached. Alternatively, a plurality of softness labels corresponding to different degrees of softness may be predefined, each with a matching sharpness value; after analyzing the sharpness value of the video, the client can compare it with the sharpness values of the softness labels to determine the matching softness label, which is then used as the display effect label of the video.
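The threshold-based "soft" labeling might be sketched as below; the per-frame sharpness values are taken as already computed by some sharpness measure, and the threshold is an illustrative assumption.

```python
# Sketch of threshold-based "soft" labeling. The per-frame sharpness
# values are assumed precomputed; the threshold is an assumption.
def softness_label(sharpness_values, threshold=100.0):
    """Average per-frame sharpness; label the video "soft" if low."""
    avg = sum(sharpness_values) / len(sharpness_values)
    return "soft" if avg < threshold else None  # None: attach no label
```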
Further, the step S104 includes:
step S1041, analyzing the image brightness of the picture frame to obtain the brightness value of the picture frame;
in one embodiment, different brightness labels may be predefined, and different brightness values are set for each brightness label. The client can analyze the image brightness of the extracted image frames to determine the brightness values of the image frames, and when a plurality of image frames exist, the average value of the brightness values of the image frames can be calculated and used as the brightness value of the video.
Step S1042, selecting a brightness label matching the brightness value from the alternative brightness labels as an output effect label of the video.
And taking each brightness label as a candidate brightness label, and selecting the brightness label matched with the brightness value of the video from the candidate brightness labels as an output effect label (display effect label) of the video.
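Steps S1041 and S1042 can be sketched as follows; the candidate label names and the 0-255 luma ranges are illustrative assumptions.

```python
# Illustrative candidate brightness labels with assumed value ranges
# on a 0-255 luma scale; labels and cut-offs are hypothetical.
BRIGHTNESS_LABELS = [
    ("dark", 0, 85),
    ("normal", 85, 170),
    ("bright", 170, 256),
]


def average_brightness(frame_values):
    """Average per-frame brightness values into one video value."""
    return sum(frame_values) / len(frame_values)


def brightness_label(value):
    """Select the candidate label whose range contains the value."""
    for label, low, high in BRIGHTNESS_LABELS:
        if low <= value < high:
            return label
    return "normal"  # fallback for out-of-range values
```

A video whose sampled frames average a luma of 190, for example, would receive the "bright" display effect label.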
In this embodiment, the client may automatically generate an output effect tag of the video in a manner of extracting a picture frame from the video and performing image analysis on the picture frame, without manually tagging the output effect tag.
In another embodiment, the server may also automatically generate an output effect tag of the video by extracting the frame of the video and performing image analysis on the frame of the video.
In other embodiments, the client may first send a data request for a video to the server, obtain an output effect tag from the server, and generate the tag of the video in the automatic tagging manner when the output effect tag of the video does not exist in the server, thereby implementing adjustment of the display effect of the video.
In addition, an embodiment of the present invention further provides an output effect adjusting apparatus, and referring to fig. 4, the apparatus includes:
the first obtaining module 10 is configured to obtain an output effect tag corresponding to an output object;
a second obtaining module 20, configured to obtain a target output parameter matched with the output effect tag; and an output module 30, configured to output the output object according to the target output parameter.
Further, the apparatus is deployed in a client, the client establishes a communication connection with a server, and the first obtaining module 10 includes:
the sending unit is used for sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and the analysis unit is used for analyzing the received description data to obtain an output effect label.
Further, the second obtaining module 20 includes:
and the searching unit is used for searching the target output parameters matched with the output effect labels from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Further, the apparatus further comprises:
a receiving module, configured to receive a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
and the adding module is used for adding the new mapping item to the preset mapping relation table.
Further, when the output object is a video, the first obtaining module 10 includes:
an extraction unit for extracting a picture frame from the video;
and the analysis unit is used for carrying out image analysis on the picture frame to obtain an analysis result and generating an output effect label of the video according to the analysis result.
Further, the analysis unit includes:
the analysis subunit is used for carrying out image brightness analysis on the picture frame to obtain a brightness value of the picture frame;
and the determining subunit is used for selecting the brightness label matched with the brightness value from the alternative brightness labels as the output effect label of the video.
Further, the output module 30 includes:
and the calling unit is used for calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment according to the target output parameters.
The specific implementation of the output effect adjusting apparatus of the present invention is basically the same as the embodiments of the output effect adjusting method, and is not described herein again.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, on which an output effect adjustment program is stored; when the output effect adjustment program is executed by a processor, the steps of the output effect adjustment method as described above are implemented.
The embodiments of the output effect adjustment device and the computer-readable storage medium of the present invention can refer to the embodiments of the output effect adjustment method of the present invention, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.