CN112235633A - Output effect adjustment method, apparatus, device, and computer-readable storage medium - Google Patents

Output effect adjustment method, apparatus, device, and computer-readable storage medium

Info

Publication number
CN112235633A
Authority
CN
China
Prior art keywords
output
output effect
effect
label
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011109704.4A
Other languages
Chinese (zh)
Inventor
佟林府
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd
Priority to CN202011109704.4A
Publication of CN112235633A
Legal status: Pending

Abstract

Translated from Chinese


The invention discloses an output effect adjustment method, apparatus, device, and computer-readable storage medium. The method includes obtaining an output effect label corresponding to an output object; obtaining a target output parameter matching the output effect label; and outputting the output object according to the target output parameter. The invention enables output objects of different types and styles to be output with different output parameters, so that each kind of output object can achieve its best output effect, improving the user's viewing and listening experience.


Description

Output effect adjusting method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for adjusting an output effect.
Background
With the continuous development and popularization of electronic devices, listening to audio and watching videos and images on them have become very common activities. As the underlying technologies mature, people's expectations for the picture quality of videos and images and the sound quality of audio keep rising. However, because output objects such as videos and audio come in many types and styles, current electronic devices, which apply the same output parameters to every object, struggle to deliver a good experience for all of them.
Disclosure of Invention
The invention mainly aims to provide an output effect adjustment method, apparatus, device, and computer-readable storage medium, so as to solve the technical problem that current electronic devices, by applying the same output parameters to every output object, struggle to deliver a good viewing experience across different output objects.
In order to achieve the above object, the present invention provides an output effect adjusting method, including the steps of:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
Optionally, the method is applied to a client, the client establishes a communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and analyzing the received description data to obtain an output effect label.
Optionally, the step of obtaining the target output parameter matched with the output effect tag includes:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Optionally, before the step of obtaining the output effect tag corresponding to the output object, the method further includes:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Optionally, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Optionally, the step of performing image analysis on the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the image frame to obtain the brightness value of the image frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Optionally, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
To achieve the above object, the present invention also provides an output effect adjusting apparatus, comprising:
the first acquisition module is used for acquiring an output effect label corresponding to an output object;
the second acquisition module is used for acquiring target output parameters matched with the output effect labels;
an output module for outputting the output object according to the target output parameter.
To achieve the above object, the present invention also provides an output effect adjustment apparatus including: a memory, a processor and an output effect adjustment program stored on the memory and executable on the processor, the output effect adjustment program when executed by the processor implementing the steps of the output effect adjustment method as described above.
Further, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an output effect adjustment program which, when executed by a processor, implements the steps of the output effect adjustment method as described above.
According to the method and the device, an output effect label corresponding to the output object is obtained, a target output parameter matching that label is obtained, and the output object is output according to the target output parameter. Different output parameters are thus applied to output objects of different types and styles, so that each kind of output object achieves its best output effect and the user's viewing experience is improved. In addition, the current approach of adjusting the output effect through real-time analysis by an artificial-intelligence algorithm while the object is being output places high computing demands on the hardware; an ordinary smartphone client cannot meet them, so that approach is not applicable there. Compared with that scheme, the invention labels the output object in advance: when the object needs to be output, the client only needs to obtain the corresponding output effect label, determine the output parameter from the label, and output the object accordingly. A better output effect is achieved with no extra demand on hardware computing power and no extra hardware cost, so the applicable range is wider.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of an output effect adjustment method according to the present invention;
FIG. 3 is a schematic view of a video display effect adjustment process according to various embodiments of the present invention;
FIG. 4 is a functional block diagram of an output effect adjustment apparatus according to a preferred embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that the output effect adjusting device in the embodiment of the present invention may be a smart phone, a personal computer, a server, and the like, and the device may be deployed in a robot, which is not limited herein.
As shown in fig. 1, the output effect adjusting apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the apparatus shown in fig. 1 does not constitute a limitation of the output effect adjustment apparatus and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and an output effect adjustment program. The operating system is a program that manages and controls the hardware and software resources of the device, supporting the execution of the output effect adjustment program as well as other software or programs. In the device shown in fig. 1, the user interface 1003 is mainly used for data communication with a client; the network interface 1004 is mainly used for establishing a communication connection with a server; and the processor 1001 may be configured to call the output effect adjustment program stored in the memory 1005 and perform the following operations:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
Further, the method is applied to a client, the client establishes communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and analyzing the received description data to obtain an output effect label.
Further, the step of obtaining the target output parameter matching with the output effect tag comprises:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Further, before the step of obtaining the output effect tag corresponding to the output object, the processor 1001 may be further configured to call the output effect adjustment program stored in the memory 1005, and perform the following operations:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Further, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Further, the step of analyzing the image of the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the image frame to obtain the brightness value of the image frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Further, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
Based on the above structure, various embodiments of the output effect adjustment method are proposed.
Referring to fig. 2, fig. 2 is a flowchart illustrating an output effect adjusting method according to a first embodiment of the present invention.
While a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order. The execution subject of each embodiment of the output effect adjustment method may be a device such as a smartphone, a personal computer, or a server; for convenience, the following embodiments are described with a client as the execution subject. In this embodiment, the output effect adjustment method includes:
step S10, acquiring an output effect label corresponding to the output object;
in the present embodiment, the output object may be video, audio, or image; outputting the video and the image means outputting the video and the image to a display device connected with the client, and displaying the content of the video and the image in the display device; outputting the audio refers to outputting the audio to a sound output device such as a sound box, a loudspeaker or an earphone connected to the client, and playing the audio content in the sound output device.
The output object may be stored locally by the client, or may be an output object such as a network video or an image acquired from the server.
When the client needs to output the output object, the client can first obtain the output effect tag corresponding to the output object. Various output effect tags are predefined, and the output effects that should be presented for output objects of different kinds of tags are different. It should be noted that, for video and image, the output effect label may be referred to as a display effect label, and for audio, may be referred to as a sound effect label.
For example, a drama and a movie should be displayed differently, so for output objects such as videos, two display effect labels, "drama" and "movie", can be defined. Some video pictures are brighter and some are darker, and their display should differ accordingly, so a "bright" display effect label can also be defined for videos. Likewise, lyrical music and rock music call for different sound effects, so for output objects such as audio, sound effect labels related to the music genre, such as "lyrical" and "rock", may be defined.
The output effect label can be attached to the output object in advance, either manually or automatically. Automatic labeling can analyze the file name of the output object, or analyze its content data and label it according to the analysis result. For example, when the output object is a video, its file name generally identifies the type of the video, such as whether it is a drama or a movie; by text matching on the file name, the client can decide whether to apply the "drama" or the "movie" display effect label. As another example, when the output object is audio, the audio waveform can be analyzed to determine the music genre and the corresponding sound effect label applied: the amplitude and frequency of the waveform can be measured, and if both exceed certain thresholds, the audio can be judged to be rock music and given a "rock" sound effect label.
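As a rough sketch of the file-name matching described above: the keyword lists and function name below are assumptions for illustration, since the patent only states that a file name generally identifies the video type.

```python
def tag_from_filename(filename):
    """Assign a display effect label by simple text matching on the
    file name. The keyword lists here are illustrative assumptions,
    not part of the patent."""
    name = filename.lower()
    # Episode-style markers suggest a drama (TV series).
    if "drama" in name or "s01e" in name or "episode" in name:
        return "drama"
    # Otherwise look for movie markers.
    if "movie" in name or "film" in name:
        return "movie"
    return None  # no label applied
```

A real client would likely combine such matching with the content-analysis route described below for files whose names carry no useful markers.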
It should be noted that one output object may carry several output effect labels; for example, a video can carry both the "movie" and "bright" labels.
Step S20, acquiring target output parameters matched with the output effect label;
Output parameters corresponding to the various output effect labels are preset in the client. These parameters can be set based on experience, so that outputting an object carrying a given label with the corresponding parameters yields the best output effect. Moreover, because the output devices attached to different clients may differ in software and hardware configuration, the same output object may need different display parameters on different devices to look its best; the output parameters preset in different clients may therefore differ. The output parameter corresponding to an output effect label can be a set of concrete parameter values or a predefined output mode; for a video, for instance, it may be specific values for each display parameter of the display, or one of the display's built-in display modes.
It should be noted that, since one output object may correspond to multiple output effect tags, output parameters corresponding to different combinations of output effect tags can also be preset in the client.
The client can determine the output parameters matched with the output effect labels of the output objects according to the preset corresponding relation, and the output parameters are called target output parameters.
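A minimal sketch of this label-to-parameter matching, assuming a dictionary keyed by sorted tag combinations; the table contents and the fallback policy are illustrative, as the patent fixes neither.

```python
def lookup_target_params(tags, table):
    """Return the output parameters matching a combination of output
    effect labels. Falls back to single-label entries when the full
    combination has no preset entry (one possible policy)."""
    key = tuple(sorted(tags))
    if key in table:
        return table[key]
    for tag in key:  # fallback: first single label with an entry
        if (tag,) in table:
            return table[(tag,)]
    return None

# Hypothetical preset mapping table for a video client.
PARAM_TABLE = {
    ("movie",): {"contrast": 55, "color_temp": "warm"},
    ("bright", "movie"): {"backlight": 35, "contrast": 60},
    ("drama",): {"contrast": 50, "color_temp": "standard"},
}
```

Sorting the tags before lookup makes the combination key order-independent, so `["movie", "bright"]` and `["bright", "movie"]` resolve to the same entry.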
Step S30, outputting the output object according to the target output parameter.
And the client outputs the output object according to the target output parameter. Specifically, the client adjusts the output parameter of the output device to the target output parameter, and then outputs the output object to the output device, so that the output device outputs the output object. When the output object is a video or an image, the client adjusts the display parameters of the display equipment to the display parameter values or the display modes corresponding to the target output parameters, and outputs the video data or the image data to the display equipment so as to enable the display equipment to display the video or the image. When the output object is audio, the client adjusts the sound parameters of the sound output equipment to the sound parameter values or the sound modes corresponding to the target output parameters, and outputs the audio data to the sound output equipment so that the sound output equipment can output the audio.
Further, the step S30 includes:
step S301, calling a parameter adjusting interface to adjust the output parameter of the output device to the target output parameter, so that the output device outputs the output object according to the target output parameter.
In an embodiment, the client provides an interface for adjusting the various output parameters, i.e., a parameter adjustment interface. After the target output parameter is determined, the client may invoke the parameter adjustment interface to set the output parameter of the output device to the target output parameter, so that the output device outputs the output object with that parameter. It should be noted that, if different output parameters are adjusted through different interfaces, the client calls the interface corresponding to the target output parameter.
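The adjustment call might look like the toy sketch below; the device class and method names are assumptions standing in for whatever platform API an actual client exposes.

```python
class OutputDevice:
    """Toy stand-in for a display or sound device that exposes a
    parameter adjustment interface (names are illustrative)."""
    def __init__(self):
        self.params = {"contrast": 50, "backlight": 50}

    def set_output_params(self, target):
        # The parameter adjustment interface: apply the target values.
        self.params.update(target)

def output_object(device, obj, target_params):
    """Adjust the device to the target parameters, then output."""
    device.set_output_params(target_params)
    return f"outputting {obj} with {device.params}"
```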
In this embodiment, the output effect label corresponding to the output object is obtained, the target output parameter matching the label is obtained, and the output object is output according to the target output parameter, so that output objects of different types and styles are output with different output parameters; each kind of output object can thus achieve its best output effect, improving the user's viewing and listening experience. In addition, the current approach of adjusting the output effect through real-time analysis by an artificial-intelligence algorithm while the object is being output places high computing demands on the hardware; an ordinary smartphone client cannot meet them, so that approach is not applicable there. Compared with that scheme, this embodiment labels the output object in advance: when the object needs to be output, the client only needs to obtain the corresponding label, determine the output parameter from it, and output the object accordingly. A better output effect is achieved with no extra demand on hardware computing power, so no extra hardware cost is incurred and the applicable range is wider.
Further, a second embodiment of the output effect adjustment method of the present invention is proposed based on the above-described first embodiment, and in this embodiment, the step S10 includes:
step S101, sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
in this embodiment, each resource (video, image, or audio) in the server may be marked with an output effect tag in advance, and the output effect tag is added to the description data of the resource. Specifically, each resource in the database of the current server has corresponding description data, such as description data describing the type of the resource, the data size of the resource, and the like; in this embodiment, an output effect tag may be added to the existing description data, for example, the tag may be added by using add function of the database. The method can be manually operated in the database to label each resource, and can also be automatically operated in the database to label the resources by analyzing the resources by the server.
When the client requests the resource from the server, the server returns the resource and the description data with the output effect label to the client; or after the client acquires the resource, when the resource is taken as an output object, the client sends a data request for requesting description data to the server independently aiming at the output object, and the server returns the description data to the client.
And step S102, analyzing the received description data to obtain an output effect label.
After receiving the description data corresponding to the output object, the client parses it to obtain the output effect label. The description data may use a common data format such as JSON (JavaScript Object Notation, a lightweight data-interchange format), and may be parsed with existing parsing methods, which are not described in detail here.
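Assuming the label is carried in a JSON field of the description data, parsing reduces to a standard JSON decode. The field names `type`, `size`, and `effect_tags` are invented for illustration; the patent only says a label field is added to the existing description data.

```python
import json

def parse_effect_tags(description_data):
    """Extract the output effect labels from the server's description
    data. The 'effect_tags' field name is an assumption for this
    sketch, not something the patent specifies."""
    data = json.loads(description_data)
    return data.get("effect_tags", [])
```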
Fig. 3 is a schematic view of the video display effect adjustment process. The client establishes a connection with the server over the standard HTTP (Hypertext Transfer Protocol); the client requests background data, the server delivers the video and its description data, and the client parses the description data to obtain the content of the display-effect-label field, determines the target display parameters from that content, and calls the corresponding display parameter adjustment interface so that the video is displayed on the display device with the target display parameters.
In this embodiment, the output effect tags are uniformly marked on the resources in the server and added to the existing resource description data, so that the client only needs to perform conventional analysis on the description data of the output object to obtain the output effect tags, and then the appropriate output parameters can be matched according to the output effect tags, thereby outputting the output object with the optimal output effect in the client. No extra requirement is required for the hardware computing power of the client, so that the hardware cost is reduced, and the application range is wider.
Further, the step S20 includes:
step S201, searching a preset mapping relation table for a target output parameter matched with the output effect tag, where the preset mapping relation table includes output parameters corresponding to various output effect tags.
In an embodiment, the mapping relationship table may be used to record output parameters corresponding to each type of output effect tag. After the client acquires the output effect label corresponding to the output object, the client searches a target output parameter matched with the output effect label in the mapping relation table.
Further, the method further comprises:
step S40, receiving a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
in an embodiment, if a new type of output effect tag is added to the server, for example, for an output object such as a video, a display effect tag of "soft" is newly added, an output parameter corresponding to the output effect tag may also be added to the server, and the server sends the output effect tag and the output parameter to the client as a new mapping entry. It should be noted that, since different clients may set different output parameters for different clients due to different hardware configurations of the output device, the new mapping entries sent by the server to different clients are different.
Further, the server may send a new mapping entry corresponding to the new output effect tag to the client when the new output effect tag is included in the output effect tags corresponding to the resources requested by the client.
And step S50, adding the new mapping entry to the preset mapping relation table.
The client adds the received new mapping entry to the mapping table, so that when a subsequent output object carries the new output effect tag, the corresponding output parameter can be looked up from the table, enabling output effect adjustment for the new object.
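Merging a server-pushed entry into the local table is then a small update; the entry layout (`tags`/`params` keys) is assumed for illustration.

```python
def apply_new_mapping_entry(table, entry):
    """Add a new mapping entry (label combination -> output
    parameters) pushed by the server to the client's local table.
    Keys are sorted tag tuples, matching the lookup convention."""
    key = tuple(sorted(entry["tags"]))
    table[key] = entry["params"]
    return table
```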
Alternatively, when an output effect tag is newly added on the server, the corresponding output parameter could be added on the client side through a manual update. Compared with that approach, having the server push the new mapping entry saves manually developed client code and spares the client a version update.
Further, based on the first and/or second embodiments, a third embodiment of the output effect adjustment method of the present invention is provided, in this embodiment, the step S10 includes:
step S103, extracting picture frames from the video;
and step S104, carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
In this embodiment, when the output object is a video, the client may extract a picture frame from the video. The frame extraction may be randomly extracting one frame, or randomly extracting multiple frames, or may be presetting the number of frames required to extract the frame, and then determining the interval of the extracted frame according to the total length of the video, that is, extracting each frame from the video at equal intervals.
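The equal-interval sampling described above can be sketched as follows; this is one straightforward way to compute the frame indices, with the preset sample count supplied by the caller.

```python
def frame_sample_indices(total_frames, n_samples):
    """Indices of n_samples picture frames taken at (roughly) equal
    intervals across a video of total_frames frames."""
    if n_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_samples  # interval derived from video length
    return [int(i * step) for i in range(n_samples)]
```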
The client performs image analysis on the extracted picture frames to obtain an analysis result, and generates the video's output effect label from it. Depending on which display effect labels are predefined, different image analyses can be run on the frames to decide whether the corresponding label applies. For example, when a "soft" display effect label is predefined, the client may analyze the sharpness of the picture frames to obtain their sharpness values; with multiple frames, the average of the per-frame values can serve as the sharpness value of the video. The client then compares this value with a preset threshold: if it is below the threshold, the picture is judged to be soft and the "soft" display effect label is generated for the video; otherwise no label is applied. Alternatively, several softness labels corresponding to different degrees of softness can be predefined, each with a matching sharpness value; after computing the video's sharpness value, the client compares it with the sharpness values of the softness labels, finds the best match, and uses that label as the video's display effect label.
Further, the step S104 includes:
step S1041, analyzing the image brightness of the picture frame to obtain the brightness value of the picture frame;
in one embodiment, different brightness labels may be predefined, with a different brightness value set for each label. The client performs image brightness analysis on the extracted picture frames to determine their brightness values; when there are multiple picture frames, the average of the per-frame brightness values may be taken as the brightness value of the video.
Step S1042, selecting a brightness label matching the brightness value from the alternative brightness labels as an output effect label of the video.
Each brightness label serves as a candidate brightness label, and the brightness label matching the brightness value of the video is selected from the candidates as the output effect label (display effect label) of the video.
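One way to realize steps S1041 and S1042 is a nearest-value match against the candidate labels. The label names, the representative brightness values, and the 0-255 luma scale below are assumptions for illustration only.

```python
# Hypothetical candidate brightness labels, each with a representative
# brightness value (e.g. mean luma on a 0-255 scale); values are illustrative.
BRIGHTNESS_LABELS = {"dark": 40, "dim": 90, "normal": 140, "bright": 200}

def brightness_label(frame_brightness):
    """Take the average frame brightness as the video brightness, then pick
    the candidate label whose representative value is closest to it."""
    video_brightness = sum(frame_brightness) / len(frame_brightness)
    return min(BRIGHTNESS_LABELS,
               key=lambda lbl: abs(BRIGHTNESS_LABELS[lbl] - video_brightness))
```

A nearest-match rule guarantees that every brightness value maps to exactly one candidate label, so the video always receives an output effect label in this embodiment.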
In this embodiment, the client can automatically generate the output effect tag of the video by extracting picture frames from the video and performing image analysis on them, without any manual tagging.
In another embodiment, the server may likewise generate the output effect tag of the video automatically, by extracting frames from the video and performing image analysis on them.
In other embodiments, the client may first send a data request for the video to the server to obtain the output effect tag; when no output effect tag for the video exists on the server, the client generates the tag itself in the automatic manner described above, so that the display effect of the video can still be adjusted.
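The server-first, auto-tag-fallback flow of this last embodiment can be sketched as below. The function names are placeholders; the server lookup and the local tagger are passed in as callables so the sketch stays self-contained.

```python
def resolve_output_tag(video_id, fetch_tag_from_server, auto_tag):
    """Ask the server for the video's output effect tag first; if the server
    holds no tag for this video (None), fall back to the local automatic
    tagging described above."""
    tag = fetch_tag_from_server(video_id)
    return tag if tag is not None else auto_tag(video_id)
```

In practice `fetch_tag_from_server` would wrap the data request to the server and `auto_tag` would wrap the frame-extraction-and-analysis pipeline.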
In addition, an embodiment of the present invention further provides an output effect adjusting apparatus, and referring to fig. 4, the apparatus includes:
the first obtaining module 10 is configured to obtain an output effect tag corresponding to an output object;
a second obtaining module 20, configured to obtain a target output parameter matched with the output effect tag;
an output module 30, configured to output the output object according to the target output parameter.
Further, the apparatus is deployed in a client, the client establishes a communication connection with a server, and the first obtaining module 10 includes:
the sending unit is used for sending a data request for the output object to the server, so that the server obtains the description data of the output object from a database according to the data request and returns it;
and the analysis unit is used for analyzing the received description data to obtain an output effect label.
Further, the second obtainingmodule 20 includes:
and the searching unit is used for searching the target output parameters matched with the output effect labels from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
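The preset mapping relationship table can be modeled as a lookup from output effect tag to output parameters. The tag names and parameter fields below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical preset mapping table: output effect tag -> output parameters.
PRESET_MAPPING = {
    "soft":   {"sharpness": 30, "contrast": 45},
    "bright": {"backlight": 90, "brightness": 65},
}

def lookup_target_params(effect_tag):
    """Return the output parameters mapped to the tag, or None when the tag
    has no entry in the preset mapping relationship table."""
    return PRESET_MAPPING.get(effect_tag)
```

Returning None for an unmapped tag lets the caller decide whether to keep the device's current output parameters or fall back to a default entry.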
Further, the apparatus further comprises:
a receiving module, configured to receive a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
and the adding module is used for adding the new mapping item to the preset mapping relation table.
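With the table modeled as a mapping, the receiving and adding modules reduce to merging a server-pushed entry into the local table. The entry shape `{new_tag: params}` is an assumption for illustration.

```python
def add_mapping_entry(mapping, new_entry):
    """Merge a server-pushed mapping entry {new_tag: params} into the local
    preset mapping table, so newly defined tag types become resolvable
    without a client software update."""
    mapping.update(new_entry)
```

Existing entries with the same tag are overwritten, which also lets the server revise the parameters of an already-known tag type.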
Further, when the output object is a video, the first obtaining module 10 includes:
an extraction unit for extracting a picture frame from the video;
and the analysis unit is used for carrying out image analysis on the picture frame to obtain an analysis result and generating an output effect label of the video according to the analysis result.
Further, the analysis unit includes:
the analysis subunit is used for carrying out image brightness analysis on the picture frame to obtain a brightness value of the picture frame;
and the determining subunit is used for selecting the brightness label matched with the brightness value from the alternative brightness labels as the output effect label of the video.
Further, the output module 30 includes:
the calling unit is used for calling a parameter adjustment interface to adjust the output parameters of the output device to the target output parameters, so that the output device outputs the output object according to the target output parameters.
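The adjust-then-output sequence performed by the calling unit can be sketched as below. The `OutputDevice` class, its parameter names, and the `set_params` interface are stand-ins invented for illustration; a real device would expose its own vendor-specific adjustment API.

```python
class OutputDevice:
    """Minimal stand-in for an output device (display/speaker); the parameter
    names and the set_params interface are assumptions for illustration."""
    def __init__(self):
        self.params = {"brightness": 50, "contrast": 50}

    def set_params(self, target):
        # Parameter adjustment interface: apply each target output parameter,
        # leaving parameters absent from the target unchanged.
        self.params.update(target)

def output_object(device, target_params, render):
    """Adjust the device to the target output parameters, then output."""
    device.set_params(target_params)
    return render()
```

The key point is the ordering: the parameters are applied before rendering, so the output object is presented with the target output parameters from its first frame.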
The specific implementation of the output effect adjusting apparatus of the present invention is basically the same as the embodiments of the output effect adjusting method, and is not described herein again.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where an output effect adjustment program is stored on the storage medium, and when the output effect adjustment program is executed by a processor, the steps of the output effect adjustment method described above are implemented.
The embodiments of the output effect adjustment device and the computer-readable storage medium of the present invention can refer to the embodiments of the output effect adjustment method of the present invention, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An output effect adjustment method, characterized in that the method comprises:
obtaining an output effect label corresponding to an output object;
obtaining a target output parameter matching the output effect label;
outputting the output object according to the target output parameter.
2. The output effect adjustment method according to claim 1, characterized in that the method is applied to a client, the client establishes a communication connection with a server, and the step of obtaining the output effect label corresponding to the output object comprises:
sending a data request for the output object to the server, so that the server obtains description data of the output object from a database according to the data request and returns the description data;
parsing the received description data to obtain the output effect label.
3. The output effect adjustment method according to claim 2, characterized in that the step of obtaining the target output parameter matching the output effect label comprises:
searching a preset mapping relationship table for the target output parameter matching the output effect label, wherein the preset mapping relationship table comprises output parameters respectively corresponding to various output effect labels.
4. The output effect adjustment method according to claim 3, characterized in that, before the step of obtaining the output effect label corresponding to the output object, the method further comprises:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
adding the new mapping entry to the preset mapping relationship table.
5. The output effect adjustment method according to claim 1, characterized in that, when the output object is a video, the step of obtaining the output effect label corresponding to the output object comprises:
extracting a picture frame from the video;
performing image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
6. The output effect adjustment method according to claim 5, characterized in that the step of performing image analysis on the picture frame to obtain an analysis result and generating the output effect label of the video according to the analysis result comprises:
performing image brightness analysis on the picture frame to obtain a brightness value of the picture frame;
selecting, from candidate brightness labels, a brightness label matching the brightness value as the output effect label of the video.
7. The output effect adjustment method according to any one of claims 1 to 6, characterized in that the step of outputting the output object according to the target output parameter comprises:
calling a parameter adjustment interface to adjust an output parameter of an output device to the target output parameter, so that the output device outputs the output object with the target output parameter.
8. An output effect adjustment apparatus, characterized in that the apparatus comprises:
a first obtaining module, configured to obtain an output effect label corresponding to an output object;
a second obtaining module, configured to obtain a target output parameter matching the output effect label;
an output module, configured to output the output object according to the target output parameter.
9. An output effect adjustment device, characterized in that the output effect adjustment device comprises: a memory, a processor, and an output effect adjustment program stored on the memory and executable on the processor, wherein the output effect adjustment program, when executed by the processor, implements the steps of the output effect adjustment method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that an output effect adjustment program is stored on the computer-readable storage medium, and the output effect adjustment program, when executed by a processor, implements the steps of the output effect adjustment method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number: CN202011109704.4A
Priority Date / Filing Date: 2020-10-16
Title: Output effect adjustment method, apparatus, device, and computer-readable storage medium
Status: Pending

Publications (1)

Publication Number: CN112235633A
Publication Date: 2021-01-15

Family

ID=74118396



Citations (4)

* Cited by examiner, † Cited by third party

CN103731722A (乐视致新电子科技(天津)有限公司; priority 2013-11-27; published 2014-04-16): Method and device for adjusting sound effect in self-adaption mode
CN106126168A (广东欧珀移动通信有限公司; priority 2016-06-16; published 2016-11-16): Sound effect processing method and device
CN108462895A (阿里巴巴集团控股有限公司; priority 2017-02-21; published 2018-08-28): Sound effect processing method, device and machine readable media
CN111131889A (深圳创维-Rgb电子有限公司; priority 2019-12-31; published 2020-05-08): Method, system and readable storage medium for scene adaptive adjustment of image and sound


Cited By (2)

* Cited by examiner, † Cited by third party

CN114339301A (深圳感臻智能股份有限公司; priority 2021-12-03; published 2022-04-12): A dynamic sound effect switching method and system based on different scenarios
CN114339301B (granted 2024-11-29): Dynamic sound effect switching method and system based on different scenes


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-01-15)

