BACKGROUND

Many wireless audio devices, such as Bluetooth® audio devices, support multiple audio modes. Each audio mode of a wireless audio device is often treated by a host computing device as a separately addressable programming entity in initializing, manipulating, and streaming audio data, and each audio mode is often exposed by the host computing device as a separate sound input or output when displayed as a visual element by the operating system.
However, due to computing resource constraints, each wireless audio device often can operate only one audio mode at a time. Yet, an end user may see multiple visual elements for a single wireless audio device, and may expect the wireless audio device to be able to operate multiple audio modes at the same time. Consequently, the audio device may not behave as expected. Similarly, a programmer may see multiple independently addressable items in a programming API.
SUMMARY

The driving of an audio device that supports two or more audio modes is disclosed. Each supported audio mode is associated with a physical device object and a device identifier. When two or more physical device objects have matching device identifiers, a coupled kernel streaming audio interface that is compatible with the physical device objects is enabled.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram illustrating a computing system for driving a wireless audio device that supports two or more audio modes.
FIG. 2 is a schematic diagram illustrating a bus of the computing device of FIG. 1.
FIG. 3 is a schematic diagram illustrating an audio driver of the computing device of FIG. 1.
FIG. 4 is a process flow of a method for selecting a kernel streaming audio interface.
FIG. 5 schematically illustrates a user interface of the computing device of FIG. 1.
FIG. 6 is a flowchart of a method for driving an audio device that supports two or more audio modes.
FIG. 7 is a flowchart of an add-audio-device routine that may be implemented for adding an audio device.
FIG. 8 is a flowchart of a start-audio-device routine that may be implemented for starting an audio device.
FIG. 9 is a flowchart of an on-audio-output-open routine that may be implemented for selecting the audio mode to use by a coupled HFP-A2DP kernel streaming audio interface, upon initiating audio output streaming to the wireless audio device.
FIG. 10 is a flowchart of an on-audio-input-open routine that may be implemented for selecting the audio mode to use by a coupled HFP-A2DP kernel streaming audio interface, upon initiating audio input streaming from the wireless audio device.
FIG. 11 is a flowchart of an on-audio-input-close routine that may be implemented for selecting the audio mode to use by a coupled HFP-A2DP kernel streaming audio interface, upon closing of audio input streaming from the wireless audio device.
DETAILED DESCRIPTION

The driving of a wireless audio device that supports two or more audio modes is disclosed. While the driving of a wireless Bluetooth® audio device that supports a Hands-Free Profile (HFP) and an Advanced Audio Distribution Profile (A2DP) is used as an example, it should be understood that other wireless devices that support other audio modes can use the herein described driving process. This disclosure is applicable to the driving of virtually any wireless audio device that is capable of supporting two or more audio modes.
As described in more detail below, the disclosed driving process provides a mechanism to expose a multi-mode wireless audio device, for example in an operating system, as a single coherent audio device, hiding the details and resource constraints of the individual audio modes supported by the multi-mode audio device. For example, the audio modes of a single wireless audio device are treated as a single programmatically addressable item and represented as a single visual element, such as an icon or a list item, rather than a separate element for each audio mode.
FIG. 1 shows an example computing device 10 that is configured to drive audio devices, such as wireless audio device A and wireless audio device B, which may support one or more audio modes, such as audio mode X and audio mode Y. The computing device may support streaming of audio data between the computing device and the audio devices. The computing device may include computer memory 18 including instructions 20 that, when executed by a logic subsystem 22, cause the logic subsystem 22 to perform various processes and routines disclosed herein. The computing device may additionally include a bus 24 and may implement various kernel streaming interfaces, such as kernel streaming audio interface 26. As used herein, the phrase “kernel streaming interface” is used to refer to the lowest-level hardware-independent I/O interface, and such an interface may take a variety of different forms in various different architectures. In one example, the kernel streaming audio interface 26 may be implemented by an audio driver 28 of the computing device.
Logic subsystem 22 may be configured to execute one or more instructions. For example, the logic subsystem 22 may be configured to execute one or more instructions that are part of one or more programs, routines, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement an abstract data type, or otherwise arrive at a desired result. The logic subsystem may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem 22 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The logic subsystem 22 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located in some embodiments.
Memory 18 may be a device configured to hold instructions that, when executed by the logic subsystem, cause the logic subsystem 22 to implement the herein described methods and processes. Memory 18 may include volatile portions and/or nonvolatile portions. In some embodiments, memory 18 may include two or more different devices that may cooperate with one another to hold instructions for execution by the logic subsystem. In some embodiments, logic subsystem 22 and memory 18 may be integrated into one or more common devices.
The computing device may further include a wireless communication subsystem 30 for wirelessly communicating with the audio devices, and a display 32 having a user interface 34 configured to characterize the kernel streaming audio interface by a single audio input end point and a single audio output end point, even when the kernel streaming audio interface is a coupled kernel streaming audio interface. In other words, the coupled kernel streaming interface is characterized by a single audio input end point and a single audio output end point. As used herein, the phrase “end point” is used to refer to an independently identifiable software object representing an audio input or output. The phrase “end point” should not be construed as being limited to any particular implementation (e.g., the implementation of a particular operating system). For example, the user interface 34 may display a single visual element 36A characterizing an audio device as having a single audio input end point, and a single visual element 36B characterizing the same audio device as having a single audio output end point.
Each of the audio devices hosted by the computing device may be identified by a device identifier 15. The audio device may for example be a Bluetooth® audio device identified by a Bluetooth® address. In this example, the computing device is shown hosting wireless audio device A identified by the device identifier “123”, and wireless audio device B identified by the device identifier “456”. In addition, the wireless audio device A is shown to support two audio modes: audio mode X and audio mode Y, while the wireless audio device B is shown to support a single audio mode: audio mode X. It should be understood that although two wireless audio devices are shown in this example, computing device 10 may potentially host any number of audio devices, including both wireless and non-wireless audio devices.
Referring now to FIG. 2, the bus 24 of the computing device may be configured to create or enumerate a separate PDO 13 (physical device object) for each audio mode of each audio device hosted by the computing device. As used herein, the term “PDO” is used to refer to a software object that represents a particular audio mode. The term “PDO” should not be construed as being limited to any particular implementation (e.g., the implementation of a particular operating system). In this example, the bus 24 enumerates PDO1 for the audio mode X of the audio device A, enumerates PDO2 for the audio mode Y of the audio device A, and enumerates PDO3 for the audio mode X of the audio device B.
Referring now to FIG. 3, the computing device may create, for example via the audio driver 28 of the computing device, an FDO 11 (functional device object) for each PDO 13 enumerated by the bus 24. The computing device may additionally maintain, for example via the audio driver 28, a device entry table 38. The device entry table 38 may relate each FDO 11 to the PDO 13 for which the FDO 11 was created. The device entry table 38 may be additionally configured to identify the audio mode supported by an FDO 11 and identify the device identifier 15 of the audio device associated with the FDO 11. The computing device may be configured to use the device entry table to match the audio modes of a single audio device using the device identifier 15 associated with each audio mode. The computing device may enable a coupled kernel streaming audio interface compatible with both the first physical device object and the second physical device object if a match exists.
In this example, the device entry table 38 lists the functional device objects FDO1, FDO2, and FDO3 and the associated physical device objects PDO1, PDO2, and PDO3. The device entry table 38 additionally identifies that FDO1 supports audio mode X, FDO2 supports audio mode Y, and FDO3 supports audio mode X. The device entry table 38 may match FDO1 with FDO2 because FDO1 and FDO2 are both associated with the same audio device having a device identifier of “123”. In contrast, FDO1 and FDO2 are not matched with FDO3, since FDO3 is associated with a different audio device having a device identifier of “456”.
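The matching step described above can be sketched in Python. This is a hypothetical illustration only; the table layout, field names, and function names are assumptions for clarity, not an actual driver API.

```python
# Hypothetical sketch of the device entry table of FIG. 3; the field
# and function names are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class DeviceEntry:
    fdo: str         # functional device object created by the audio driver
    pdo: str         # physical device object enumerated by the bus
    audio_mode: str  # audio mode supported by the FDO
    device_id: str   # identifier of the audio device (e.g., a Bluetooth address)

def matching_entries(table, entry):
    """Return the other entries whose device identifier matches entry's."""
    return [e for e in table
            if e.device_id == entry.device_id and e.fdo != entry.fdo]

# The example table of FIG. 3: FDO1 and FDO2 share device "123".
table = [
    DeviceEntry("FDO1", "PDO1", "X", "123"),
    DeviceEntry("FDO2", "PDO2", "Y", "123"),
    DeviceEntry("FDO3", "PDO3", "X", "456"),
]
```

Under this sketch, FDO1 matches FDO2 but not FDO3, mirroring the example above.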
Still referring to FIG. 3, the audio driver 28 may implement various types of kernel streaming audio interfaces. In this example, the audio driver 28 may implement a coupled kernel streaming audio interface 40 that supports both audio mode X and audio mode Y, an uncoupled kernel streaming audio interface 42 that supports audio mode X, and an uncoupled kernel streaming audio interface 44 that supports audio mode Y. The audio driver 28 may be configured to internally determine which type of kernel streaming audio interface 26 to implement.
FIG. 4 illustrates an example data flow within the audio driver 28 of the computing device for deciding which type of the kernel streaming audio interfaces 26 to implement when streaming audio data to and/or from an audio device that supports both the HFP audio profile and the A2DP audio profile. In this example, the audio driver 28 creates an HFP-FDO 46 for the HFP audio profile and creates an A2DP-FDO 48 for the A2DP audio profile.
The HFP-FDO 46 exposes either an uncoupled HFP kernel streaming audio interface 50 or a coupled HFP-A2DP kernel streaming audio interface 52 depending on whether the device identifier 15 of the HFP-FDO 46 matches the device identifier 15 of the A2DP-FDO 48. If the device identifiers do not match, the HFP-FDO 46 exposes the uncoupled HFP kernel streaming audio interface 50. On the other hand, if the device identifiers match, the HFP-FDO 46 exposes the coupled HFP-A2DP kernel streaming audio interface 52.
The A2DP-FDO 48 exposes either an uncoupled A2DP kernel streaming audio interface 54 or the coupled HFP-A2DP kernel streaming audio interface 52 depending on whether the device identifier 15 of the HFP-FDO 46 matches the device identifier 15 of the A2DP-FDO 48. If the device identifiers match, the A2DP-FDO 48 exposes the coupled HFP-A2DP kernel streaming audio interface 52. On the other hand, if the device identifiers do not match, the A2DP-FDO 48 exposes the uncoupled A2DP kernel streaming audio interface 54.
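The exposure decision of the preceding two paragraphs reduces to a comparison of device identifiers. The following Python sketch is a simplified illustration under that assumption; the function name and string return values are hypothetical, not a real driver interface.

```python
def expose_interface(own_device_id, other_device_id, own_profile):
    """Decide which kernel streaming audio interface an FDO exposes.

    own_profile is the audio profile the FDO was created for ("HFP" or
    "A2DP"). When the two device identifiers match, the FDO exposes the
    coupled HFP-A2DP interface; otherwise it exposes its own uncoupled
    interface. (Illustrative sketch only.)
    """
    if own_device_id == other_device_id:
        return "coupled HFP-A2DP"
    return f"uncoupled {own_profile}"
```

For the example devices of FIG. 1, both FDOs of device “123” would expose the coupled interface, while the lone FDO of device “456” would expose an uncoupled one.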
FIG. 5 is a schematic diagram illustrating an example user interface 34 configured to display a single visual representation 36A for all audio capture capable modes and a single visual representation 36B for all audio playback capable modes of a single audio device that supports multiple audio modes. In other embodiments, both the audio capture capable modes and the audio playback capable modes may be represented by a single visual element.
FIG. 6 is a flowchart of an example method 600 for driving an audio device supporting two or more audio modes. While the flowchart depicts the coupling of two audio modes, it should be understood that three or more audio modes may be coupled in the same or a similar manner. The method 600 may be implemented by computing device 10 of FIG. 1. The method 600 may include, at 602, associating a first physical device object of an audio device with a first device identifier, the first physical device object representing a first audio mode enumerated by a bus enumerator.
At 604, the method may further include associating a second physical device object of the audio device with a second device identifier, the second physical device object representing a second audio mode enumerated by the bus enumerator. In some examples, the first audio mode supports mono audio playback and voice capture, and the second audio mode supports stereo audio playback without voice capture. In one specific example, the first audio mode is an HFP audio profile and the second audio mode is an A2DP audio profile.
At 606, the method may include determining whether the first device identifier matches the second device identifier. If the first device identifier matches the second device identifier, the method proceeds to 608; otherwise, the method proceeds to 610.
At608, the method may include enabling a coupled kernel streaming audio interface compatible with both the first physical device object and the second physical device object. The coupled kernel streaming audio interface may implement the first audio mode if the audio device is operating in a first mode, or implement the second audio mode if the audio device is operating in a second mode. Further, the method may include locking a coupled kernel streaming audio interface to an audio mode.
In some examples, the method may further include representing an audio device as a single coherent audio device. For example, the method may include representing all audio capture capable audio modes of the audio device as a single visual representation and representing all audio playback capable audio modes of the audio device as a different single visual representation or the same single visual representation.
At610, the method may include enabling a first uncoupled kernel streaming audio interface compatible with the first physical device object or enabling a second uncoupled kernel streaming audio interface compatible with the second physical device object.
FIGS. 7-11 are example routines that may be implemented as parts of a method (e.g., method 600) for driving an audio device that supports multiple audio modes.
Referring now to FIG. 7, this figure shows an example add-audio-device routine 700 that may be implemented by a computing device as a part of the method 600 for adding an audio device. The routine 700 may include, at 702, obtaining a device identifier of the PDO enumerated or created by a bus, and at 704, creating an FDO corresponding to the PDO.
The routine may further include, at 706, checking the device entry table for another FDO with the same device identifier, and at 708, determining whether such another FDO exists. The routine may further include, at 710, disabling an uncoupled kernel streaming audio interface on the other FDO if the other FDO exists, and at 712, adding the new FDO to the device entry table.
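The steps of routine 700 can be sketched as follows. This is a rough illustration under assumed names; the dictionary-based device entry table is a stand-in for whatever structure a real driver would use.

```python
# Hypothetical sketch of the add-audio-device routine of FIG. 7; the
# dict-based device entry table and keys are illustrative assumptions.
def add_audio_device(device_table, pdo, device_id, audio_mode):
    # 702/704: obtain the device identifier and create an FDO for the PDO.
    fdo = {"pdo": pdo, "device_id": device_id, "mode": audio_mode,
           "uncoupled_enabled": True, "started": False}
    # 706/708: check the table for another FDO with the same identifier.
    for other in device_table:
        if other["device_id"] == device_id:
            # 710: disable the uncoupled interface on the other FDO; the
            # coupled interface is enabled later, at start time (FIG. 8).
            other["uncoupled_enabled"] = False
            fdo["uncoupled_enabled"] = False
    # 712: add the new FDO to the device entry table.
    device_table.append(fdo)
    return fdo
```

Adding a second FDO for device “123” thus disables the uncoupled interface on both of that device's FDOs, while an FDO for a different device is unaffected.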
FIG. 8 is an example start-audio-device routine 800 for starting the audio device that supports an HFP audio profile and an A2DP audio profile. The routine 800 may include, at 802, checking the device entry table for another FDO with the same device identifier, and at 804, determining whether another FDO exists. If another FDO does not exist, the routine 800 proceeds to 806. If another FDO exists, the routine 800 proceeds to 808.
The routine may additionally include, at 806, enabling an HFP or A2DP uncoupled kernel streaming audio interface. Alternatively, the routine may include, at 808, checking the plug-and-play start state of the other FDO, at 810, determining whether the other FDO has been started, and at 812, enabling a coupled HFP-A2DP kernel streaming audio interface if the other FDO has been started.
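Routine 800 might be sketched as follows, again with hypothetical names and dict-based FDO records assumed purely for illustration.

```python
def start_audio_device(device_table, fdo):
    """Sketch of steps 802-812 of FIG. 8 (illustrative, not a real API)."""
    fdo["started"] = True
    # 802/804: check the device entry table for a matching FDO.
    others = [e for e in device_table
              if e["device_id"] == fdo["device_id"] and e is not fdo]
    if not others:
        # 806: no match, so enable the FDO's own uncoupled interface.
        fdo["interface"] = f"uncoupled {fdo['mode']}"
    elif others[0].get("started"):
        # 808-812: the other FDO has already started, so enable the
        # coupled HFP-A2DP interface for both.
        fdo["interface"] = others[0]["interface"] = "coupled HFP-A2DP"
    return fdo.get("interface")
```

In this sketch, starting the first of two matching FDOs enables nothing yet; starting the second enables the coupled interface on both, consistent with the check at 810.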
FIG. 9 is a flowchart of an on-audio-output-open routine 900 for selecting the audio mode to use when implementing a coupled HFP-A2DP kernel streaming audio interface upon initiating audio output streaming to the wireless audio device. The routine 900 may include, at 902, determining whether the kernel streaming audio interface is locked in the HFP audio profile. If the HFP is locked in, the routine proceeds to 904. If the HFP is not locked in, the routine proceeds to 906.
The routine may include, at 904, starting audio output of the audio device using the HFP audio profile. Alternatively, the routine may include, at 906, determining whether the kernel streaming audio interface of the audio device is locked in the A2DP audio profile. If the A2DP profile is not locked in, the routine proceeds to 908; otherwise, the routine proceeds to 910.
At 908, the routine may include determining whether audio input of the audio device is active. If the answer is yes, the routine proceeds to 904 to start audio output of the audio device using the HFP audio profile, via, for example, an uncoupled kernel streaming audio interface that supports the HFP audio profile. If the answer is no, the routine proceeds to 910 to start audio output of the audio device using the A2DP audio profile, via, for example, an uncoupled kernel streaming audio interface that supports the A2DP audio profile.
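The profile selection of routine 900 condenses into a small decision function. The sketch below assumes the lock state and input activity are available as plain values; the names are hypothetical.

```python
def on_audio_output_open(locked_profile, input_active):
    """Pick the output profile per steps 902-910 of FIG. 9 (sketch).

    locked_profile is "HFP", "A2DP", or None; input_active indicates
    whether audio input (voice capture) is currently streaming.
    """
    if locked_profile == "HFP":
        return "HFP"                             # 904
    if locked_profile == "A2DP":
        return "A2DP"                            # 910
    # 908: not locked; use HFP while input is active, else A2DP.
    return "HFP" if input_active else "A2DP"
```

The unlocked case reflects the resource constraint noted in the Background: while voice capture runs, output must ride the same HFP connection rather than open A2DP.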
FIG. 10 is an example on-audio-input-open routine 1000 for selecting the audio mode to use by a coupled HFP-A2DP kernel streaming audio interface upon initiating audio input. The routine 1000 may include, at 1002, determining whether the coupled kernel streaming audio interface is locked in the A2DP audio profile. If the answer is yes, the routine 1000 fails at 1012. If the answer is no, the routine proceeds to 1004.
At 1004, the routine may include determining whether audio output of the coupled HFP-A2DP kernel streaming audio interface is active using the A2DP audio profile. If the answer is yes, the routine may include, at 1006, stopping audio output of the coupled HFP-A2DP kernel streaming audio interface using the A2DP audio profile. If the answer is no, the routine may proceed to 1010.
The routine may further include, at 1008, starting audio output of the coupled HFP-A2DP kernel streaming audio interface using the HFP audio profile, and at 1010, starting audio input of the coupled HFP-A2DP kernel streaming audio interface using the HFP audio profile.
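Routine 1000 can be sketched over a simple state dictionary. The keys ("locked", "output_profile", "input_profile") are assumed names for illustration, not part of any real driver interface.

```python
def on_audio_input_open(state):
    """Sketch of steps 1002-1012 of FIG. 10 over an assumed state dict
    with keys "locked", "output_profile", and "input_profile"."""
    if state["locked"] == "A2DP":
        # 1012: input cannot be opened while locked to A2DP.
        raise RuntimeError("audio input open fails: locked to A2DP")
    if state["output_profile"] == "A2DP":
        # 1006: stop audio output over A2DP, then 1008: restart it
        # over HFP so that input and output share one profile.
        state["output_profile"] = "HFP"
    state["input_profile"] = "HFP"  # 1010: start audio input over HFP
    return state
```

Opening input thus migrates any active A2DP output onto HFP, since only one profile can run at a time on the device.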
FIG. 11 is an example on-audio-input-close routine 1100 for selecting the audio mode to use upon closing of audio input by a coupled HFP-A2DP kernel streaming audio interface. The routine 1100 may include, at 1102, stopping audio input using the HFP audio profile, and at 1104, determining whether the coupled HFP-A2DP kernel streaming audio interface is locked in the HFP audio profile.
If the coupled HFP-A2DP kernel streaming audio interface is not locked in the HFP audio profile, the routine may include, at 1106, determining whether audio output is active using the HFP audio profile. If the audio output is active using the HFP audio profile, the routine may include, at 1108, stopping audio output through the HFP audio profile, and at 1110, starting audio output using the A2DP audio profile.
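Routine 1100 might be sketched similarly, using the same assumed state dictionary; as before, the names are illustrative only.

```python
def on_audio_input_close(state):
    """Sketch of steps 1102-1110 of FIG. 11 over an assumed state dict
    with keys "locked", "output_profile", and "input_profile"."""
    state["input_profile"] = None  # 1102: stop audio input over HFP
    # 1104/1106: unless locked to HFP, move any active HFP output back
    # to A2DP (1108 stop over HFP, 1110 restart over A2DP).
    if state["locked"] != "HFP" and state["output_profile"] == "HFP":
        state["output_profile"] = "A2DP"
    return state
```

Closing input is thus the mirror of opening it: once voice capture ends, output can return to the higher-fidelity A2DP profile unless the interface is locked to HFP.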
It will be appreciated that the embodiments described herein may be implemented, for example, via computer-executable instructions or code, such as programs, stored on computer-readable storage media and executed by a computing device. Generally, programs include routines, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. As used herein, the term “program” may connote a single program or multiple programs acting in concert, and may be used to denote applications, services, or any other type or class of program. Likewise, the terms “computer” and “computing device” as used herein include any device that electronically executes one or more programs, including two or more such devices acting in concert.
It should be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.