Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described in the examples below do not represent all embodiments consistent with the application; they are merely examples of systems and methods consistent with aspects of the application as set forth in the claims.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims, and in the above-described figures are used for distinguishing between identical or similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 2 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 2, the display device 200 is also in data communication with a server 400, and a user can operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller, and communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, so that the display device 200 is controlled in a wireless or wired manner. The user may control the display device 200 by inputting a user instruction through at least one of a key on the remote controller, a voice input, a control panel input, and the like.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
In some embodiments, software steps performed by one step execution body may migrate on demand to be performed on another step execution body in data communication therewith. For example, software steps executed by the server may migrate to be executed on demand on a display device in data communication therewith, and vice versa.
Fig. 3 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 3, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user, convert the operation instruction into an instruction recognizable by and responsive to the display device 200, and serve as an intermediary for interaction between the user and the display device 200.
Fig. 4 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, the display device 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
In some embodiments, the communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near field communication protocol chip, and an infrared receiver. The display device 200 may transmit and receive control signals and data signals to and from the control apparatus 100 or the server 400 through the communicator 220.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Referring to FIG. 5, in some embodiments, the system is divided into four layers, which are, from top to bottom, an application layer (referred to as the "application layer"), an application framework layer (Application Framework layer) (referred to as the "framework layer"), an Android runtime (Android Runtime) and system library layer (referred to as the "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. The application framework layer acts as a processing center that decides the actions of the applications in the application layer. Through the API interface, an application can access the resources in the system and obtain the services of the system during execution.
As shown in fig. 5, the application framework layer in the embodiment of the present application includes a manager (Manager), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) used to interact with all activities that are running in the system; a Location Manager (Location Manager) used to provide system services or applications with access to system location services; a Package Manager (Package Manager) used to retrieve various information about the application packages currently installed on the device; a Notification Manager (Notification Manager) used to control the display and clearing of notification messages; and a Window Manager (Window Manager) used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the individual applications as well as common navigation and rollback functions, such as controlling the exit, opening, and back-off of applications. The window manager is used to manage all window programs, for example obtaining the size of the display screen, determining whether there is a status bar, locking the screen, capturing the screen, and controlling changes of the display window (for example, shrinking the display window, dithering the display, distorting the display, etc.).
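As a hedged illustration of one of the framework-layer services described above, the following Kotlin sketch queries the display size through the window manager using standard Android framework calls; the function name is an example only and the deprecated calls are used purely for brevity.

```kotlin
import android.content.Context
import android.graphics.Point
import android.view.WindowManager

// Illustrative sketch: obtaining the size of the display screen through the window manager,
// one of the framework-layer services described above.
@Suppress("DEPRECATION") // defaultDisplay/getSize are deprecated on recent API levels
fun displaySize(context: Context): Point {
    val windowManager = context.getSystemService(Context.WINDOW_SERVICE) as WindowManager
    val size = Point()
    windowManager.defaultDisplay.getSize(size)
    return size
}
```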
In some embodiments, the system runtime layer provides support for the layer above it, i.e., the framework layer. When the framework layer is in use, the Android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions required by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 5, the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, a sensor driver (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), a power driver, and the like.
In the process of performing the audio playing function, the display device 200 may decode the audio source data to be played through multiple audio play flows to obtain the final audio data, and play the final audio data through the audio output interface 270; the final audio data may be played through a speaker of the display device 200 or through an external sound emitting device. In some embodiments, the display device 200 may decode and play the source data to be played through a device hardware play flow. The hardware play flow means that the controller 250 of the display device 200 decodes the source data to be played by running the audio playing program in the operating system to obtain the final audio data, and plays the final audio data through the audio output interface 270. In some embodiments, the display device 200 may also decode and play the source data to be played through a software play flow of the browser. The software play flow means that the controller 250 of the display device 200 decodes the source data to be played by running the audio playing program embedded in the browser to obtain the final audio data, and plays the final audio data through the audio output interface 270.
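By way of a hedged illustration only, the following Kotlin sketch shows one possible realization of the device hardware play flow using the standard Android MediaPlayer API, in which the operating system's media framework performs the decoding and routes the result to the audio output interface; the browser software play flow would instead rely on a decoder embedded in the browser (for example an HTML audio element). The function name and the use of MediaPlayer are assumptions, not the specific implementation of the display device 200.

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.MediaPlayer
import android.net.Uri

// Minimal sketch of the device hardware play flow: the system media framework decodes the
// audio source data and sends the final audio data to the audio output interface.
fun playThroughSystemFlow(context: Context, source: Uri): MediaPlayer {
    val player = MediaPlayer()
    player.setAudioAttributes(
        AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .build()
    )
    player.setDataSource(context, source) // audio source data to be played
    player.prepare()                      // decoding pipeline set up by the framework
    player.start()                        // final audio data routed to the audio output
    return player
}
```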
In some embodiments, the audio source data may consist of audio data only, or may be the audio portion of a video source. If the audio data is contained in a video source, the display device 200 needs to first process the video source to extract the audio data from it, and then process and play the extracted audio data in the above manner.
In some embodiments, as shown in fig. 4, the display device 200 may be connected to an audio output interface 270 and play audio source data through the audio output interface 270. For example, the display device 200 may be connected to its own speaker through the audio output interface 270 and play the audio source data through the speaker, or may be connected to an external sound emitting device through an external audio output terminal such as an HDMI port or a USB interface. Taking an ARC device as an example of the playback device, as shown in fig. 6, the ARC device 01 has an HDMI ARC output port 02; in this case the HDMI port of the display device 200 is an HDMI ARC input port 03 having the ARC function, the HDMI ARC output port 02 and the HDMI ARC input port 03 are connected by an HDMI line 04 supporting ARC, and audio data can be transmitted from the display device 200 to the ARC device 01 through the HDMI line 04 and played through the ARC device 01. Since the ARC device 01 has the ARC function, an additional composite audio line or optical line between the display device 200 and the ARC device 01 can be omitted.
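As a hedged sketch, assuming an Android-based display device, the following Kotlin fragment enumerates the audio output devices known to the system and checks whether an HDMI ARC sink such as ARC device 01 is present. It uses the standard AudioManager/AudioDeviceInfo API (API level 23+) and is not necessarily how the display device 200 detects the ARC device.

```kotlin
import android.content.Context
import android.media.AudioDeviceInfo
import android.media.AudioManager

// Illustrative sketch: detect whether an HDMI ARC output (e.g. ARC device 01 connected
// through the HDMI ARC input port 03) is currently available as an audio sink.
fun findHdmiArcOutput(context: Context): AudioDeviceInfo? {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val outputs = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
    return outputs.firstOrNull { it.type == AudioDeviceInfo.TYPE_HDMI_ARC }
}
```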
In some embodiments, as shown in fig. 4, the display device 200 may be connected to sound emitting devices through the communicator 220 and play source data through the sound emitting devices; for example, the display device 200 is connected to the sound emitting devices through a Wi-Fi module, a Bluetooth module, a wired Ethernet module, or the like. Taking a Bluetooth headset as an example of the sound emitting device, as shown in fig. 7, the device-search and discoverable functions of the Bluetooth headset 001 and of the display device 200 are turned on; after the Bluetooth headset 001 and the display device 200 are successfully paired, a Bluetooth transmission channel 002 is established, and the sound source data can be transmitted from the display device 200 to the Bluetooth headset 001 through the Bluetooth transmission channel 002 and played through the Bluetooth headset 001.
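Similarly, a hedged Kotlin sketch of locating an already-paired Bluetooth sound emitting device such as Bluetooth headset 001, using the standard Android Bluetooth API; the function name is an example, the call requires Bluetooth permissions (BLUETOOTH_CONNECT on recent Android versions), and discovery of new, unpaired devices would additionally use startDiscovery() with a broadcast receiver.

```kotlin
import android.bluetooth.BluetoothDevice
import android.bluetooth.BluetoothManager
import android.content.Context

// Illustrative sketch: list the devices already paired (bonded) with the display device,
// among which a previously paired Bluetooth headset would appear once the Bluetooth
// transmission channel can be re-established.
fun bondedBluetoothDevices(context: Context): Set<BluetoothDevice> {
    val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
    val adapter = manager.adapter ?: return emptySet() // null if the device has no Bluetooth hardware
    if (!adapter.isEnabled) return emptySet()          // Bluetooth is switched off
    return adapter.bondedDevices
}
```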
Communication between the display device 200 and a sound emitting device relies on a communication protocol, and sound emitting devices can be divided into designated sound emitting devices and non-designated sound emitting devices according to whether they share the same communication protocol with the display device 200. A designated sound emitting device is a sound emitting device that has the same communication protocol as the display device 200; it can exchange sound source data with the display device 200 and can be controlled by the display device 200 to set related playing parameters, such as sound effect parameters. A non-designated sound emitting device is a sound emitting device that does not have the same communication protocol as the display device 200; although a non-designated sound emitting device can be recognized by the display device 200 and can exchange sound source data with it, the display device 200 cannot control the setting of its playing parameters. In some embodiments, in order to further ensure the security of the sound source data transmission and the security of the display device 200, data transmission between the display device 200 and non-designated sound emitting devices may be prohibited.
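The classification into designated and non-designated sound emitting devices can be summarized by the following hypothetical Kotlin sketch; the SoundDevice type, the protocol strings, and the classify function are illustrative assumptions rather than part of any standard API.

```kotlin
// Hypothetical model: a sound emitting device is "designated" when it shares at least one
// communication protocol with the display device, so that the display device can both
// transmit sound source data to it and set its playing parameters (e.g. sound effects).
data class SoundDevice(val name: String, val protocols: Set<String>)

enum class DeviceClass { DESIGNATED, NON_DESIGNATED }

fun classify(device: SoundDevice, displayProtocols: Set<String>): DeviceClass =
    if (device.protocols.any { it in displayProtocols }) DeviceClass.DESIGNATED
    else DeviceClass.NON_DESIGNATED

// Optional policy from the embodiment above: data transmission to non-designated devices
// may be prohibited to protect the sound source data and the display device.
fun transmissionAllowed(deviceClass: DeviceClass): Boolean = deviceClass == DeviceClass.DESIGNATED
```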
As can be seen from the above, when the display device 200 plays source data, there may be multiple sound emitting devices for the user to select from, and the user may switch the currently used sound emitting device to a target sound emitting device that meets the requirement by sending a switching instruction. However, as shown in fig. 1, when the user inputs the switching instruction through the sound emitting device setting option, the option remains selectable even when there is no other sound emitting device that can be switched to. As a result, after the user inputs the switching instruction, the display device 200 may not display the next menu (the sound emitting device setting menu) for a long time, so that the user cannot further select a target sound emitting device; or the sound emitting device setting menu displayed by the display device 200 is blank, i.e., there is no sound emitting device option available for selection, so that the user cannot perform further operations. The user is thus led to perform multiple invalid operations, which degrades the user experience.
In order to solve the above problems, the embodiment of the present application provides a sound setting menu as shown in fig. 8. If there is no other sound emitting device currently connected to the display device 200 besides the sound emitting device currently in use, then, in contrast to fig. 1, a prompt identifier is displayed on the sound emitting device setting option in the sound setting menu shown in fig. 8. Taking fig. 8 as an example, the sound emitting device setting option is set to gray (the font color is a designated dark gray, the background color is a designated light gray, and the option does not respond to selection).
Fig. 9 illustrates a flow chart of a method of displaying a sound setting menu by the display apparatus 200.
In step 901, the display device 200 receives a control instruction input by a user for selecting the sound setting option in a user interface.
In step 902, based on the received control instruction, the currently used sound emitting device and the sound emitting device currently connected to the display device 200 are identified. In this embodiment, the currently used sound emitting device is referred to as the first sound emitting device, and the sound emitting device currently connected to the display device 200 is referred to as the second sound emitting device.
In step 903, based on the identification result, a sound setting menu including the sound emitting device setting option is generated. If the identification result is that the number of second sound emitting devices is 0, a prompt identifier is displayed on the sound emitting device setting option (see the sound emitting device setting option shown in fig. 8) to prompt the user that there is no sound emitting device available for switching other than the currently used first sound emitting device. If the identification result is that the number of second sound emitting devices is greater than 0, the sound emitting device setting option is displayed normally in the sound setting menu.
In step 904, the display device 200 displays the sound setting menu on the user interface. By browsing the sound emitting device setting option, the user can directly and quickly learn whether there is currently a sound emitting device that can be switched to: if the sound emitting device setting option is displayed normally, it can be selected, indicating that there is a sound emitting device available for switching, and the user can perform a further switching operation; if the prompt identifier is displayed on the sound emitting device setting option, there is currently no sound emitting device available for switching, and the user does not need to perform any further switching operation.
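Steps 902-903 can be summarized by the following hypothetical Kotlin sketch; the data class and function names are illustrative assumptions, not the actual implementation of the display device 200.

```kotlin
// Hypothetical sketch of steps 902-903: the sound emitting device setting option carries a
// prompt identifier (and stops responding to selection) when no second sound emitting device
// is currently connected.
data class SoundSettingMenu(
    val currentDeviceName: String,  // name of the first (currently used) sound emitting device
    val optionSelectable: Boolean,  // false -> prompt identifier shown, option grayed out
)

fun buildSoundSettingMenu(firstDevice: String, secondDevices: List<String>): SoundSettingMenu =
    SoundSettingMenu(
        currentDeviceName = firstDevice,
        optionSelectable = secondDevices.isNotEmpty(), // number of second sound emitting devices > 0
    )
```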
The operation between the control apparatus 100 and the display device 200 is schematically shown in figs. 10a-10b. Illustratively, as shown in fig. 10a, the user sends a control instruction to the display device 200 through the control apparatus 100, e.g., a remote controller: by operating the "up", "down", "left", and "right" keys of the remote controller, the user moves the option key 2 on the user interface 1 to the sound setting option and, by selecting the sound setting option, sends the control instruction to the display device 200. In response to the control instruction, the display device 200 displays a sound setting menu on the user interface 1, as shown in fig. 10b. The sound setting menu includes the sound emitting device setting option and, according to design requirements, may also display other options, such as a picture-off option (no picture is displayed and only sound is played). As shown in fig. 10b, the device name of the currently used sound emitting device (the first sound emitting device), e.g., "speaker", is displayed on the sound emitting device setting option, indicating that the currently used sound emitting device is the speaker of the display device 200 itself. The display device 200 identifies the sound emitting device (the second sound emitting device) currently connected to the display device 200; if no second sound emitting device is identified, i.e., the number of second sound emitting devices is 0, a prompt identifier is displayed on the sound emitting device setting option as shown in fig. 10b, for example the option is grayed out, indicating that there is currently no sound emitting device available for the user to switch to.
In some embodiments, the prompt identifier may be represented by a designated font color of the sound emitting device setting option (e.g., for the gray state, a designated light gray font color), a designated background color of the sound emitting device setting option (e.g., for the gray state, a designated dark gray background color), designated text displayed on the sound emitting device setting option ("no selectable sound emitting device", "no other sound emitting device", etc.), and the like, or the prompt identifier may employ a combination of some or all of the foregoing.
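A hedged sketch, assuming the option is rendered as an Android TextView, of how the prompt identifier described above could be applied; the concrete colors and text are placeholder values, not those specified by the embodiment.

```kotlin
import android.graphics.Color
import android.widget.TextView

// Illustrative only: gray out the sound emitting device setting option and stop it from
// responding to selection; any combination of the measures below may serve as the prompt identifier.
fun showPromptIdentifier(option: TextView) {
    option.isEnabled = false                               // option no longer responds to selection
    option.setTextColor(Color.parseColor("#555555"))       // designated (placeholder) font color
    option.setBackgroundColor(Color.parseColor("#DDDDDD")) // designated (placeholder) background color
    option.text = "No selectable sound emitting device"    // designated (placeholder) prompt text
}
```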
The operation between the control apparatus 100 and the display device 200 is schematically shown in figs. 11a-11b. The operation process illustrated in fig. 11a is the same as that illustrated in fig. 10a and will not be described here again. Fig. 11b differs from fig. 10b in that, if the display device 200 identifies that the number of second sound emitting devices is greater than 0, the sound emitting device setting option is displayed normally on the sound setting menu, i.e., no prompt identifier is displayed on it, and the option can be selected, as shown in fig. 11b.
Based on the above-described procedure, after the display device 200 displays the sound setting menu, the user can continue the switching operation of the sound emitting device as needed. A flow chart of a method of presenting a user interface in the display device 200 is schematically shown in fig. 12.
In step 1201, the display device 200 receives a switching instruction input by the user through the sound emitting device setting option.
In step 1202, based on the switching instruction, the display device 200 identifies whether the prompt identifier is present on the sound emitting device setting option.
In step 1203, if the prompt identifier is present on the sound emitting device setting option, execution of the switching instruction is prohibited; for example, the next menu of the sound emitting device setting option, i.e., the sound emitting device setting menu, is not displayed. Typically, the sound emitting device setting menu includes device options of the first sound emitting device and the second sound emitting device.
In step 1204, if the prompt identifier is not present on the sound emitting device setting option, the switching instruction is executed to switch the first sound emitting device to the target sound emitting device, where the target sound emitting device is one of the second sound emitting devices.
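Steps 1202-1204 can be sketched, under hypothetical naming chosen for illustration only, as follows; the result type and function are assumptions rather than the actual implementation.

```kotlin
// Hypothetical sketch of steps 1202-1204.
sealed class SwitchResult {
    object Prohibited : SwitchResult()                        // 1203: prompt identifier present,
                                                              // the next menu is not displayed
    data class Switched(val target: String) : SwitchResult()  // 1204: switched to the target device
}

fun handleSwitchInstruction(promptIdentifierShown: Boolean, targetDevice: String?): SwitchResult =
    if (promptIdentifierShown || targetDevice == null) SwitchResult.Prohibited
    else SwitchResult.Switched(targetDevice) // target belongs to the second sound emitting devices
```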
In some embodiments, based on the flow shown in figs. 10a-10b, figs. 10c-10g schematically illustrate the operation between the control apparatus 100 and the display device 200. After the sound setting menu shown in fig. 10b is displayed, the user further sends a first switching instruction to the display device 200 through the control apparatus 100, for example by operating the remote controller to move the option key 2 to the sound emitting device setting option and selecting that option, as shown in fig. 10c. In response to the first switching instruction, the display device 200 identifies that the prompt identifier is present on the sound emitting device setting option; at this time, the display device 200 does not display the sound emitting device setting menu and continues to display the current interface. In some embodiments, in order to prompt the user and correct the user's misoperation in time, the display device 200 may generate and display prompt information on the current interface as shown in fig. 10d; the prompt information may be text information, for example "no other access device currently" as shown in fig. 10d, or voice information.
In some embodiments, after displaying the prompt information, the display device 200 may also continue to respond to the first switching instruction and display the sound emitting device setting menu, as shown in fig. 10e, where the menu includes only the device name of the first sound emitting device, for example "speaker". As shown in fig. 10f, the user may send a second switching instruction to the display device 200 through the control apparatus 100, for example by operating the remote controller to move the option key 2 to the speaker option, so as to continue using the first sound emitting device "speaker". As shown in fig. 10g, the display device 200 keeps using the first sound emitting device "speaker" in response to the second switching instruction and displays a new sound setting menu, which is the same as that of fig. 10b. In this way, the user only needs to send the second switching instruction once to redisplay the sound setting menu and perform other operations. This reduces the number of operations needed to return to the sound setting menu through rollback operations (for example, the user sending a return instruction to the display device 200 through the control apparatus 100 multiple times), thereby reducing the number of interactions between the user and the display device 200 and, correspondingly, the processing required for the display device 200 to respond to multiple return instructions.
In some embodiments, as shown in fig. 13, the sound emitting device setting menu includes not only the device option of the first sound emitting device but also the device option of a third sound emitting device (such as the Bluetooth headset in fig. 13). The third sound emitting device is a historical sound emitting device that was previously connected to the display device 200 but is not currently connected. Because the third sound emitting device cannot currently be selected, a non-selectable identifier needs to be displayed on its device option, such as graying it out (as shown in fig. 13), a special symbol, a designated font color, a designated background color, and so on. In this way, the user can learn which sound emitting devices have been used by browsing the device options of the third sound emitting device, which provides the user with a reference for connecting sound emitting devices and enables the user to quickly select a suitable sound emitting device to connect to the display device 200.
In some embodiments, based on the flow shown in figs. 11a-11b, figs. 11c-11f schematically illustrate the operation between the control apparatus 100 and the display device 200. After the sound setting menu shown in fig. 11b is displayed, the user further sends a first switching instruction to the display device 200 through the control apparatus 100 as shown in fig. 11c; this specific process may refer to the process shown in fig. 10c and is not repeated here. In response to the first switching instruction, the display device 200 identifies that no prompt identifier is present on the sound emitting device setting option and generates the sound emitting device setting menu, as shown in fig. 11d. The menu includes the device names of the first sound emitting device and the second sound emitting device; for example, the first sound emitting device is the "speaker" and is marked by a dashed box as a prompt identifier indicating that it is the currently used sound emitting device. Other prompt identifiers may also be used to distinguish the first sound emitting device from the second sound emitting device, for example a designated font color, a designated background color, or a designated symbol on the device option of the first sound emitting device. In this example the second sound emitting device includes an ARC device, as shown in fig. 11d. As shown in fig. 11e, the user may send a second switching instruction to the display device 200 through the control apparatus 100, for example by operating the remote controller to move the option key 2 to the device option of the target sound emitting device (the ARC device in this embodiment) and selecting that device option. As shown in fig. 11f, the display device 200 switches the first sound emitting device "speaker" to the target sound emitting device "ARC device" in response to the second switching instruction and displays a new sound setting menu, in which the device name "ARC device" of the currently used sound emitting device is displayed on the sound emitting device setting option. The sound emitting device setting option remains in the normal state, that is, no prompt identifier is displayed and the option can be selected, and the user can still switch the "ARC device" back to the "speaker" by a procedure similar to that of figs. 11c-11f.
In some embodiments, as shown in fig. 14, the sound emitting device setting menu includes not only the device options of the first sound emitting device and the second sound emitting device but also the device option of the third sound emitting device, and a non-selectable identifier is displayed on the device option of the third sound emitting device; for the non-selectable identifier, reference may be made to the foregoing description, which is not repeated here. By browsing the device option of the third sound emitting device, the user can still broaden the choices for later connection and use of sound emitting devices.
Fig. 15 illustrates a flow chart of a method of presenting a user interface in the display device 200.
In step 1501, the display device 200 identifies the device types of the first and second sound emitting devices in response to a control instruction.
In step 1502, if the device type of at least one of the first and second sound emitting devices is a designated sound emitting device, the display device 200 generates a sound setting menu that includes not only the sound emitting device setting option but also a sound effect setting option. Through the sound effect setting option displayed in the sound setting menu, the user can quickly learn that among the first and second sound emitting devices there is a device whose sound effect parameters can be set.
In step 1503, if the device types of the first and second sound emitting devices are both non-designated sound emitting devices, the display device 200 generates a sound setting menu that includes only the sound emitting device setting option and does not include the sound effect setting option. From the absence of the sound effect setting option in the sound setting menu, the user can quickly learn that none of the first and second sound emitting devices supports setting of sound effect parameters.
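Steps 1502-1503 reduce to a simple decision, sketched below with hypothetical names; whether a device is "designated" would be determined as described earlier (for example from the interface protocol).

```kotlin
// Hypothetical sketch of steps 1502-1503: the sound effect setting option is included in the
// sound setting menu only if at least one of the first/second sound emitting devices is a
// designated sound emitting device.
fun shouldShowSoundEffectOption(
    firstIsDesignated: Boolean,
    secondAreDesignated: List<Boolean>,
): Boolean = firstIsDesignated || secondAreDesignated.any { it }
```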
Based on the process shown in figs. 10a-10g, in some embodiments, when the display device 200 responds to the control instruction and no second sound emitting device is present, only the device type of the first sound emitting device needs to be identified. For example, if the first sound emitting device is the speaker, the display device 200 identifies whether the speaker is a designated sound emitting device, e.g., by identifying the interface protocol of the audio interface through which the speaker is connected to the display device 200, so as to determine whether that protocol allows the sound effect parameters of the speaker to be adjusted by the display device 200. If adjusting the sound effect parameters of the speaker through the display device 200 is allowed, the speaker is a designated sound emitting device; otherwise, it is a non-designated sound emitting device. The sound setting menu shown in fig. 16 takes the speaker as a designated sound emitting device; in this case, the sound effect setting option is displayed on the sound setting menu to prompt the user that there is a sound emitting device whose sound effect parameters can be adjusted, since in this embodiment the only sound emitting device currently connected to the display device 200 is the first sound emitting device. If the display device 200 identifies that the device type of the first sound emitting device is a non-designated sound emitting device, there is no need to display the sound effect setting option on the sound setting menu; in this case, the sound setting menu shown in fig. 10b may be displayed to prompt the user that there is currently no sound emitting device whose sound effect parameters can be adjusted, since the only sound emitting device currently connected to the display device 200 is the first sound emitting device, whose sound effect parameters cannot be adjusted. In some embodiments, the sound effect setting option is a default option in the sound setting menu, i.e., it is displayed by default, as in the sound setting menu shown in fig. 16. Only when there is currently no sound emitting device whose sound effect parameters can be adjusted is the sound effect setting option hidden before the sound setting menu is displayed, for example by overlaying a layer on the sound effect setting option whose color is the same as the background color of the sound setting menu (as shown in fig. 17, the sound effect setting option covered by the layer is shown by a dotted line), thereby achieving the hiding effect; the sound setting menu actually seen by the user may refer to fig. 10b.
Based on the process shown in figs. 11a-11f, in some embodiments, the display device 200 identifies the device types of the first and second sound emitting devices in response to a control instruction sent by the user, for example the first sound emitting device being a speaker and the second sound emitting device being an ARC device, and generates a sound setting menu according to the identification result. In some embodiments, the speaker and the ARC device are both designated sound emitting devices; or the first sound emitting device is a speaker (a designated sound emitting device) and the second sound emitting device is a Bluetooth headset (a non-designated sound emitting device). Either case indicates that, among the sound emitting devices currently connected to the display device 200, there is a device whose sound effect parameters can be adjusted, so the display device 200 displays the sound setting menu shown in fig. 18, which includes the sound emitting device setting option and the sound effect setting option; the sound effect setting option prompts the user that the first or second sound emitting device includes a device whose sound effect parameters can be adjusted. In some embodiments, where the speaker and the ARC device are both non-designated sound emitting devices, the display device 200 may display a sound setting menu as shown in fig. 11b, i.e., no sound effect setting option is displayed in the sound setting menu.
Whether the sound effect setting option is present on the sound setting menu prompts the user whether there is currently a sound emitting device whose sound effect parameters can be adjusted. Further, when the sound effect setting option is displayed on the sound setting menu, it is specifically used to adjust the sound effect parameters of the currently used sound emitting device; that is, selecting the sound effect setting option enters a sound effect parameter setting menu, and the sound effect parameters displayed in that menu correspond to the currently used sound emitting device. For example, when the display device 200 receives the control instruction and identifies that the first sound emitting device (the speaker) and the second sound emitting device (the ARC device) are both designated sound emitting devices, the sound effect setting option is displayed in the sound setting menu as shown in fig. 18, and the display device 200 further identifies whether the first sound emitting device, i.e., the speaker, is a designated sound emitting device. In some embodiments, when the first sound emitting device is a designated sound emitting device, a normal sound effect setting option is displayed in the sound setting menu as shown in fig. 18; that is, the sound effect setting option can be selected and the corresponding sound effect parameter setting menu can be entered to adjust the sound effect parameters of the first sound emitting device. For example, the user sends a first parameter tuning instruction to the display device 200 through the control apparatus 100, and in response the display device 200 displays the sound effect parameter setting menu shown in fig. 19, which includes a plurality of sound effect parameters such as Sound mode, Surround, Sound Remaster, Bass emphasis, and Equalizer; the user then sends a second parameter tuning instruction to the display device 200 through the control apparatus 100, and in response the display device 200 sets each sound effect parameter to its target value.
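A minimal sketch, with assumed value types, of the sound effect parameters listed for fig. 19 and of applying a second parameter tuning instruction; the parameter names follow the figure, while the types, ranges, and function are illustrative assumptions.

```kotlin
// Hypothetical model of the sound effect parameters shown in fig. 19.
data class SoundEffectParams(
    val soundMode: String = "Standard",
    val surround: Boolean = false,
    val soundRemaster: Boolean = false,
    val bassEmphasis: Int = 0,                    // assumed 0..10 scale
    val equalizer: Map<String, Int> = emptyMap(), // assumed mapping: frequency band -> gain
)

// Second parameter tuning instruction: set the selected parameters to their target values,
// leaving the others unchanged.
fun applyParameterTuning(current: SoundEffectParams, surround: Boolean? = null, bass: Int? = null) =
    current.copy(
        surround = surround ?: current.surround,
        bassEmphasis = bass ?: current.bassEmphasis,
    )
```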
In some embodiments, when the first sound emitting device is a non-designated sound emitting device, an unavailable identifier is displayed on the sound effect setting option; for example, as shown in fig. 20, the sound effect setting option is grayed out, or uses a designated font color, a designated background color, a special mark added on the option, and the like, to prompt the user that the sound effect parameters of the first sound emitting device cannot be adjusted. Further, if the user mistakenly sends a first parameter tuning instruction through the sound effect setting option, the display device 200 may display a prompt message, for example "the sound effect parameters cannot currently be adjusted", to remind the user that the sound effect parameters of the first sound emitting device cannot be adjusted.
The process in which the display device 200 switches the first sound emitting device to the target sound emitting device in response to the switching instruction further includes: the display device 200 identifies whether the target sound emitting device (the ARC device) is a designated sound emitting device. In some embodiments, when the target sound emitting device is a designated sound emitting device, a sound effect setting option without an unavailable identifier is displayed in the sound setting menu (see fig. 18), the device name of the target sound emitting device is displayed on the sound emitting device setting option, and the sound effect setting option can be selected and is specifically used to adjust the sound effect parameters of the target sound emitting device. The process of displaying the sound effect setting option and adjusting the sound effect parameters of the target sound emitting device may refer to the process corresponding to fig. 19 and is not repeated here.
In some embodiments, when the target sound emitting device is a non-designated sound emitting device, an unavailable identifier is displayed on the sound effect setting option, indicating that the sound effect parameters of the target sound emitting device cannot be adjusted (see fig. 20); the device name of the target sound emitting device is displayed on the sound emitting device setting option.
According to the above technical solution, when the user sets the sound emitting device, the display device and the sound emitting device setting method provided by this embodiment can display a prompt identifier on the sound emitting device setting option in the sound setting menu to prompt the user that there is currently no sound emitting device available for switching; display a sound effect setting option in the sound setting menu to prompt the user that, among the currently connected sound emitting devices, there is a device whose sound effect parameters can be adjusted; and display an unavailable identifier on the sound effect setting option to prompt the user that the sound effect parameters of the currently used sound emitting device cannot be adjusted. In this way, by browsing the sound setting menu, the user can quickly determine the further operations for setting the sound emitting device, unnecessary operations are avoided, and the user experience is improved.
The above-provided detailed description covers merely a few examples under the general inventive concept and does not limit the scope of the present application. Any other embodiments obtained by a person skilled in the art by extending the solution of the application without inventive effort also fall within the scope of protection of the application.