Disclosure of Invention
In order to solve the above problems, embodiments of the present application provide a method for cross-device audio data transmission and an electronic device. The earphone connected to the mobile phone does not need to be disconnected from the mobile phone; as long as the mobile phone is connected to the smart screen, the earphone can receive and play the audio data sent by the smart screen. This avoids the time delay and the cumbersome switching procedure caused by switching connections, and improves the user experience.
In a first aspect, a method for cross-device audio data transmission is provided, applied to a first electronic device, and includes:
establishing a first connection with a second electronic device and establishing a second connection with a third electronic device, wherein the first electronic device and the second electronic device both comprise a display screen, and the third electronic device comprises a loudspeaker;
receiving audio data sent by the second electronic device, wherein the audio data corresponds to the audio played by the second electronic device;
receiving graphical user interface data sent by the second electronic device, wherein the graphical user interface data corresponds to the graphical user interface displayed when the second electronic device plays the audio;
transmitting the audio data to the third electronic device, so that the third electronic device converts the audio data into audio output;
and converting the graphical user interface data into a graphical user interface to be displayed on a display screen of the local machine.
The audio played by the second electronic device is transmitted, through the first electronic device, to the third electronic device connected to the first electronic device for playback, which avoids the delay caused by switching connections and re-pairing the third electronic device. The third electronic device does not need to be paired with the second electronic device; it only needs to be connected to the first electronic device, and the user can then hear the audio played by the second electronic device through the third electronic device, improving the user experience.
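The relay behaviour of the first aspect can be illustrated with a minimal sketch. The class and method names below (`AudioRelay`, `on_audio_data`, `on_gui_data`) are illustrative assumptions for exposition, not part of the claimed method:

```python
# Minimal sketch of the first-aspect relay: the first electronic device
# (e.g. a mobile phone) forwards audio frames to the third device
# (e.g. an earphone) and displays GUI data from the second device
# (e.g. a smart screen) on its own screen.

class AudioRelay:
    """Models the first electronic device in the first aspect."""

    def __init__(self):
        self.forwarded_audio = []   # frames forwarded over the second connection
        self.displayed_gui = None   # GUI data rendered on the local display

    def on_audio_data(self, frame):
        # Receive audio data over the first connection and forward it
        # so the third device can convert it into audio output.
        self.forwarded_audio.append(frame)

    def on_gui_data(self, gui):
        # Receive GUI data over the first connection and display it
        # on the local display screen.
        self.displayed_gui = gui


relay = AudioRelay()
relay.on_audio_data(b"ring-frame-1")
relay.on_gui_data({"controls": ["answer", "reject"]})
```

In a real implementation the two callbacks would be driven by the transport layer of the first connection (e.g. a WiFi or Bluetooth stack); here they are invoked directly to show the data flow.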
For example, as shown in fig. 4B, the smart screen receives a call for the user: the audio played by the smart screen is the incoming-call ringtone, and at the same time the graphical user interface 303 is displayed. The graphical user interface 303 includes incoming-call information (for example, the caller's avatar) and controls (an answer control and a reject control). The smart screen can send the data corresponding to the graphical user interface to the mobile phone, the mobile phone displays the graphical user interface, and the user can decide on the mobile phone whether to answer or reject the call. This facilitates interaction with the application on the smart screen, and the user does not need to stand in front of the smart screen to click the controls, which improves the user experience.
For example, consider a scenario in which the first electronic device and the second electronic device are both mobile phones: the first electronic device is the mobile phone of user A, and the second electronic device is the mobile phone of user B. User B can transmit the movie picture being played on his or her phone to the mobile phone of user A for display, and the audio of the movie is played through a speaker connected to the mobile phone of user A, so that the audio and video of the movie can be shared.
With reference to the first aspect, in certain possible implementations of the first aspect, the graphical user interface includes a video screen.
For example, the second electronic device is playing video, wherein the displayed video frame is a graphical user interface.
With reference to the first aspect, in certain possible implementations of the first aspect, the first connection includes a wired connection.
With reference to the first aspect, in certain possible implementations of the first aspect, the first connection includes at least one of WiFi (Wireless Fidelity), Bluetooth, WiFi Direct, and NFC (Near Field Communication).
For example, the first electronic device may be connected to the second electronic device via Bluetooth and WiFi.
With reference to the first aspect, in some possible implementation manners of the first aspect, the first connection includes a connection to a wireless network provided by the same network device.
The first electronic device and the second electronic device are connected to the same wireless network, to which a number of other electronic devices may also be connected. While transmitting the audio data to the first electronic device, the second electronic device also transmits the graphical user interface data or video picture data to the first electronic device, so that the user can operate on the first electronic device and interact with the application on the second electronic device, and the user does not need to search among many electronic devices for the one that is playing the audio. For example, the user returns home wearing an earphone that is connected to a mobile phone, and the mobile phone is connected to the smart screen, the tablet computer, and the personal computer in the home. When the smart screen receives an incoming call, the smart screen can send the incoming-call audio data and the incoming-call picture data to the mobile phone; the mobile phone sends the audio data to the earphone, so the user hears the incoming-call ringtone through the earphone and can decide on the mobile phone whether to answer the call. This saves the user from searching among multiple electronic devices for the one receiving the call, and improves the user experience.
With reference to the first aspect, in some possible implementations of the first aspect, the first connection includes a connection using UWB (Ultra-Wideband) technology.
With reference to the first aspect, in certain possible implementation manners of the first aspect, the second connection includes at least one of a wireless connection and a wired connection.
With reference to the first aspect, in certain possible implementations of the first aspect, the second connection includes a bluetooth connection.
With reference to the first aspect, in some possible implementation manners of the first aspect, before receiving audio data sent by the second electronic device, the connection between the local device and the second electronic device is detected, and connection information of a third electronic device is sent to the second electronic device, where the third electronic device establishes a second connection with the local device.
Optionally, the connection information may include a MAC (Media Access Control) address of the third electronic device.
When the first electronic device detects that a connection has been established with the second electronic device, it actively sends the connection information of the third electronic device connected to the local device to the second electronic device. This saves the user from manually broadcasting and searching, through the second electronic device, for a third electronic device capable of cross-device sound playback, achieves real-time awareness, and improves the user experience.
With reference to the first aspect, in some possible implementation manners of the first aspect, before receiving audio data sent by the second electronic device, the connection between the local device and the third electronic device is detected, and connection information of the third electronic device is sent to the second electronic device, where the second electronic device establishes a first connection with the local device.
Optionally, the connection information includes a MAC address of the third electronic device.
When the first electronic device detects that a connection has been established with the third electronic device, it actively sends the connection information of the third electronic device connected to the local device to the second electronic device. This saves the user from manually broadcasting and searching, through the second electronic device, for a third electronic device capable of cross-device sound playback, achieves real-time awareness, and improves the user experience.
In combination with the first aspect, before the first electronic device receives the audio data sent by the second electronic device, a query request sent by the second electronic device is received, and connection information of the third electronic device is sent to the second electronic device, where the query request is used to request the connection information of the third electronic device, and the third electronic device establishes a second connection with the local device.
After the second electronic device detects that it is connected to the first electronic device, it actively queries the first electronic device for a third electronic device capable of cross-device sound playback, achieving real-time awareness. This avoids the tedious process of the user manually searching on the second electronic device for a third electronic device capable of cross-device sound playback, and improves the user experience.
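The two ways of exchanging connection information described above (proactive push on connection, and query/response) can be sketched as follows. The message shapes (`"query"`, `"conn_info"`) and the example MAC addresses are illustrative assumptions, not a defined wire protocol:

```python
# Sketch of the connection-information exchange between the first and
# second electronic devices. Message field names and the MAC-address
# strings are assumptions for illustration only.

def handle_message(msg, third_device_mac="AA:BB:CC:DD:EE:FF"):
    """First device: answer a query from the second device with the
    connection info (e.g. MAC address) of the third device connected
    to the local device."""
    if msg.get("type") == "query":
        return {"type": "conn_info", "mac": third_device_mac}
    return None  # other message types are not handled in this sketch


def on_first_connection_established(third_device_mac):
    """First device: proactively push the third device's connection info
    to the second device as soon as the first connection is detected,
    so no query round-trip is needed."""
    return {"type": "conn_info", "mac": third_device_mac}
```

Either path leaves the second electronic device knowing which sound-capable third device is reachable through the first device, which is what enables the cross-device audio routing described above.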
With reference to the first aspect, in some possible implementation manners of the first aspect, the local device further establishes a third connection with a fourth electronic device, displays a graphical user interface locally, receives an instruction input by a user, and determines an audio data source device, where the audio data source device includes the second electronic device and the fourth electronic device; the local device receives audio data sent by the audio data source device and sends the audio data to the third electronic device.
By providing a graphical user interface on the first electronic device on which the user selects the audio data source device, the user keeps the initiative and can choose which audio data to receive, and conflicts caused by two electronic devices sending audio data simultaneously can be avoided.
With reference to the first aspect, in some possible implementation manners of the first aspect, the local device further establishes a third connection with the fourth electronic device, detects that the second electronic device and the fourth electronic device play audio simultaneously, determines an audio service type corresponding to the audio played by the second electronic device and the fourth electronic device respectively, and determines an audio data source device according to the priority of the audio service type, where the audio data source device includes the second electronic device and the fourth electronic device.
For example, when the smart screen is playing music, the mobile phone receives the audio data of the music service sent by the smart screen and transmits it to the Bluetooth headset. When the mobile phone detects that the tablet computer has an incoming-call service (or the tablet computer notifies the mobile phone of the incoming call), the mobile phone compares the priority of the smart screen's music service with that of the tablet computer's incoming-call service. Having determined that the incoming-call service has the higher priority, the mobile phone stops receiving the audio data of the smart screen, receives the audio data corresponding to the incoming-call service of the tablet computer, and sends it to the Bluetooth headset, so that the user can take the call through the Bluetooth headset.
The audio data source device to be received is determined according to the priority of the audio service types; the first electronic device preferentially provides the user with the audio service of higher priority, so the user does not miss important services, and the user experience is improved.
Optionally, the audio service type may include at least one of a call service, a video service, a music service, and a notification service.
Optionally, the call service has a higher priority than the music service and the video service.
Optionally, the notification service has a higher priority than the music service.
Optionally, the notification service may include a notification service generated by an alarm clock or a notification service generated by a reminder.
When the user is listening through the earphone to music played on the smart screen and a reminder on the tablet computer fires, or an alarm clock on the tablet computer goes off, the mobile phone stops transmitting the data of the smart screen's music service to the earphone and preferentially transmits the audio data corresponding to the reminder service or the alarm service of the tablet computer to the earphone, so as to alert the user and prevent the user from being immersed in the music and forgetting pending tasks.
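The priority-based arbitration above can be sketched as a simple lookup. The numeric priority values are illustrative assumptions; only the ordering stated in the text (call above music and video, notification above music) is taken from the description:

```python
# Sketch of choosing the audio data source device by audio service type.
# The numeric values are assumptions; the relative ordering follows the
# optional priorities described above.

SERVICE_PRIORITY = {
    "call": 3,          # call service: highest priority
    "notification": 2,  # alarm clock or reminder
    "video": 1,
    "music": 1,
}

def pick_audio_source(playing):
    """playing: dict mapping a candidate device name to the audio service
    type it is currently playing. Returns the device whose service has
    the highest priority; that device becomes the audio data source."""
    return max(playing, key=lambda dev: SERVICE_PRIORITY[playing[dev]])

# Smart screen plays music while the tablet has an incoming call:
# the call service wins, so the tablet becomes the source device.
source = pick_audio_source({"smart_screen": "music", "tablet": "call"})
```

A production implementation would also handle ties (e.g. keep the current source) and re-run the selection whenever a device starts or stops playing audio.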
Optionally, after the data source device is switched, the first electronic device displays a notification message, where the notification message is used to notify the user of the current data source device.
Optionally, after the data source device is switched, the first electronic device displays a notification message, where the notification message includes a first control; when the first electronic device detects an instruction input by the user on the first control, it cancels the data source device switch.
With reference to the first aspect, in some possible implementation manners of the first aspect, the local device further establishes a third connection with the fourth electronic device, detects that the second electronic device and the fourth electronic device play audio simultaneously, and determines an audio data source device to be received according to priorities of the second electronic device and the fourth electronic device, where the audio data source device includes the second electronic device and the fourth electronic device.
By judging the priorities of different electronic devices, the sounding requirements of the electronic devices with high priorities can be preferentially met, manual switching of users is avoided, and user experience is improved.
With reference to the first aspect, in some possible implementation manners of the first aspect, the first electronic device periodically counts the frequency with which the local device connects to the second electronic device and the fourth electronic device respectively, and ranks their priorities according to these frequencies.
For example, the first electronic device 100 counts the frequency with which the local device connects to the second electronic device 101 and the fourth electronic device 104 within a week (or a month). Finding that the local device connects to the second electronic device 101 more frequently, it can determine that the user more often needs to interact with the second electronic device 101, and therefore determines that the second electronic device 101 has the higher priority and sets it as the audio data source device.
By periodically counting the frequency of connection with the different electronic devices and updating their priorities accordingly, the priorities can match the user's preferences, the user does not have to adjust the priorities manually, and the user experience is improved.
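The frequency-based ranking above can be sketched with a connection log. Representing the statistics period as a plain list of connection events, and using a counter to rank devices, are implementation assumptions:

```python
# Sketch of ranking candidate source devices by how often the local
# device connected to them during the statistics period (e.g. a week).

from collections import Counter

def rank_by_connection_frequency(connection_log):
    """connection_log: list of device names, one entry per connection
    event in the period. Returns the devices ordered from highest
    priority (most frequently connected) to lowest."""
    counts = Counter(connection_log)
    return [dev for dev, _ in counts.most_common()]

# phone_B was connected three times this week, the tablet twice,
# so phone_B gets the higher priority.
log = ["phone_B", "tablet", "phone_B", "phone_B", "tablet"]
ranking = rank_by_connection_frequency(log)
```

Re-running this ranking at the end of each period keeps the priorities aligned with the user's recent habits without any manual adjustment.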
In a second aspect, a method for cross-device audio data transmission is provided, applied to a second electronic device, and includes:
Establishing a first connection with a first electronic device;
detecting that the local device is playing audio, and sending audio data corresponding to the audio to the first electronic device, so that the first electronic device sends the audio data to a third electronic device, where the third electronic device establishes a second connection with the first electronic device;
and sending graphical user interface data or video picture data to the first electronic device, so that the first electronic device displays the graphical user interface, wherein the graphical user interface data corresponds to the graphical user interface displayed when the local device plays the audio.
With reference to the second aspect, in certain possible implementation manners of the second aspect, before sending audio data to the first electronic device, a query request is sent to the first electronic device, where the query request is used to request connection information of a third electronic device connected to the first electronic device.
With reference to the second aspect, in some possible implementations of the second aspect, the user sets, on a setting interface of the second electronic device, whether to share the screen of the local device while transmitting cross-device audio data.
With reference to the second aspect, in some possible implementations of the second aspect, the second electronic device decides whether to transmit the graphical user interface data according to the service type of the audio played locally.
For example, when the local audio is generated by an incoming-call service, which generally requires interactive operation by the user, the second electronic device may transmit the graphical user interface data to the first electronic device, so as to facilitate interaction with the user and improve the user experience. When the local audio is generated by a music service (e.g., the second electronic device is playing music through a music application or a program through a radio application), which generally does not require interactive operation by the user, the second electronic device may refrain from transmitting graphical user interface data to the first electronic device, so as to save power.
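The decision above reduces to classifying service types as interactive or not. The particular set of interactive service types below is an assumption extrapolated from the examples given (incoming call vs. music):

```python
# Sketch of the second device's decision on whether to send GUI data
# along with the audio. Which service types count as interactive is an
# assumption for illustration.

INTERACTIVE_SERVICES = {"call"}  # services that normally need user input

def should_share_gui(service_type):
    """Return True if the graphical user interface data should be
    transmitted to the first electronic device along with the audio;
    non-interactive services skip it to save power."""
    return service_type in INTERACTIVE_SERVICES
```

The set can be extended (e.g. with video-call services) without changing the decision logic.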
With reference to the second aspect, in some possible implementation manners of the second aspect, after the second electronic device transmits the graphical user interface displayed locally to the first electronic device for display, the second electronic device turns off its screen, which saves power.
While the graphical user interface displayed locally is being transmitted, the user's attention is on the first electronic device, so the screen of the local device can be turned off, saving power.
With reference to the second aspect, in some possible implementation manners of the second aspect, after the second electronic device transmits the graphical user interface displayed locally to the first electronic device for display, the second electronic device detects that the battery level of the local device is lower than a first threshold and stops sharing the graphical user interface or the video picture displayed locally.
By monitoring the local battery level while transmitting the graphical user interface displayed locally, and automatically stopping the transmission when the battery is low, power consumption can be saved.
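The low-battery guard can be sketched as a single check. The 20% threshold value is an illustrative assumption; the text only names "a first threshold":

```python
# Sketch of the battery guard: stop sharing the local GUI/video when
# the battery level drops below the first threshold. The 20% value is
# an assumption for illustration.

FIRST_THRESHOLD = 20  # percent

def update_sharing(battery_percent, currently_sharing):
    """Return the new sharing state: sharing stops once the battery
    level falls below the threshold, and is otherwise unchanged."""
    if currently_sharing and battery_percent < FIRST_THRESHOLD:
        return False
    return currently_sharing
```

A real device would evaluate this check on each battery-level event from the power management module rather than polling.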
In a third aspect, an electronic device is provided, comprising: a memory, one or more processors, the memory comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the cross-device audio data transmission method of the first aspect and possible implementations of the first aspect.
In a fourth aspect, there is provided an electronic device comprising: a display screen, a memory, one or more processors, the memory comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method of cross-device audio data transmission in the second aspect and possible implementations of the second aspect.
In a fifth aspect, there is provided an apparatus for cross-device audio data transmission, the apparatus being comprised in an electronic device, the apparatus having functionality to implement the first electronic device behaviour of the first aspect and possible implementations of the first aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a sixth aspect, there is provided an apparatus for cross-device audio data transmission, the apparatus being comprised in an electronic device, the apparatus having functionality to implement the second electronic device behaviour of the second aspect and possible implementations of the second aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a seventh aspect, there is provided a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of cross-device audio data transmission in any of the possible implementations of any of the above aspects.
In an eighth aspect, there is provided a computer program product for, when run on an electronic device, causing the electronic device to perform the method of cross-device audio data transmission in any of the possible designs of any of the above aspects.
In a ninth aspect, there is provided a system comprising the first electronic device of the first aspect, a second electronic device, and a third electronic device, wherein the first electronic device establishes a first connection with the second electronic device and a second connection with the third electronic device, the first and second electronic devices both comprise a display screen, and the third electronic device comprises a speaker;
The first electronic device receives audio data sent by the second electronic device, wherein the audio data corresponds to the audio being played by the second electronic device;
the first electronic device receives graphical user interface data sent by the second electronic device, wherein the graphical user interface data is data corresponding to a graphical user interface displayed when the second electronic device is playing audio;
the first electronic device sends the audio data to the third electronic device, and the third electronic device converts the received audio data into audio output;
the second electronic device sends the graphical user interface data to the first electronic device, and the first electronic device converts the graphical user interface data into a graphical user interface to be displayed on a display screen of the local electronic device.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The method provided by the embodiments of the present application can be applied to scenarios in which electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, and augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) devices are connected with electronic devices such as notebook computers, ultra-mobile personal computers (ultra-mobile personal computer, UMPC), netbooks, and personal digital assistants (personal digital assistant, PDA); the specific types of the connected electronic devices are not limited.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to reuse the instructions or data, it can call them directly from the memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transfer data between the electronic device 100 and a peripheral device, or to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of theelectronic device 100. The charging management module 140 may also supply power to the electronic device through thepower management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 may implement audio functions, such as music playing and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may play music or conduct a hands-free call through the speaker 170A.
The receiver 170B, also referred to as an "earpiece," is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a telephone call or plays a voice message, voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mike" or a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting a sound signal into the microphone 170C. The electronic device 100 may be provided with one or more microphones 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise and brightness of the image, and can further optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, and the image is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By referring to the structure of a biological neural network, for example, the transmission mode between human brain neurons, it rapidly processes input information and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 100, for example image recognition, speech recognition, and text understanding, may be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the code includes instructions. The processor 110 executes the instructions stored in the internal memory 121 to perform various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, application programs required for one or more functions (such as a sound playing function and an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phonebook), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory such as one or more magnetic disk storage devices, flash memory devices, or universal flash storage (UFS).
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration alert. The motor 191 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playing) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminding, receiving information, alarm clock, and game) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging status, a battery level change, a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100.
Fig. 2 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey a notification-type message, which can automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify of a completed download, a message alert, and the like. The notification manager may also present a notification in the system top status bar in the form of a graph or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, and the like.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the performance functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
In the embodiment of the present application, referring to fig. 2, the system library may further include an image processing library. After the camera application is started, the camera application may acquire an image acquired by the electronic device.
The system shown in fig. 3 includes a first electronic device 100, a second electronic device 101, and a third electronic device 102, where the first electronic device 100 may be a mobile phone, a tablet computer, a smart screen, or the like; the second electronic device 101 may be a mobile phone, a tablet computer, a smart screen, or the like; and the third electronic device 102 may be an electronic device including a speaker, such as a Bluetooth headset or a Bluetooth speaker.
In an exemplary scenario, the first electronic device 100 is a mobile phone, the second electronic device 101 is a smart screen, and the third electronic device 102 is an earphone. While the user receives, through the earphone 102, audio data corresponding to a first audio played by the mobile phone 100, the smart screen 101 receives a call request (including a voice call request, a video call request, or an incoming call request, where the voice call request and the video call request may come from social applications such as WeChat or LINE, or from FaceTime applications, and the incoming call request may come from a call in a telephone application), and the audio generated by the call request is a second audio. At this time the user cannot hear, through the earphone 102, the second audio on the smart screen 101 (for example, the second audio may be a call ringtone or call content audio). The user needs to manually disconnect the earphone 102 from the mobile phone 100 and then establish a connection between the earphone 102 and the smart screen 101 so that the second audio can be heard through the earphone 102. If the earphone 102 is connected to the smart screen 101 for the first time, a complicated connection process such as manually entering a pairing code is also required, which results in a cumbersome connection process and poor user experience.
The following description uses, as an illustrative example, the first electronic device 100 being a mobile phone, the second electronic device 101 being a smart screen, and the third electronic device 102 being a Bluetooth headset.
As shown in fig. 4, the mobile phone displays an illustrative graphical user interface 301. The illustrative graphical user interface 301 may be a video playing interface or a music playing interface. The mobile phone transmits the audio data corresponding to the first audio it plays (the audio in a video played by the mobile phone, or music played by the mobile phone) to the Bluetooth headset, and the Bluetooth headset sounds after receiving the audio data corresponding to the first audio. It should be understood that playing in the embodiments of the present application means that the system of the electronic device plays audio; the electronic device may play the audio through a speaker device externally connected to it, such as a Bluetooth headset or a Bluetooth speaker. The electronic device may also sound through a speaker connected to the local device while playing the audio. It should also be understood that the electronic device may not be connected to any speaker while playing audio, in which case the electronic device does not sound.
The mobile phone establishes a first connection with the smart screen, and establishes a second connection with the Bluetooth headset. The smart screen receives an incoming call, and the incoming call generates a second audio, which may be an incoming-call ringtone or the audio of the call. The mobile phone receives the audio data of the incoming call sent by the smart screen and sends the audio data to the Bluetooth headset, and the Bluetooth headset converts the audio data into the second audio, outputting it by sounding through its speaker. For example, if the second audio is a ringtone "dingdong", the sound emitted by the Bluetooth headset is also "dingdong".
The mobile phone establishes a first connection with the smart screen, and establishes a second connection with the Bluetooth headset; the Bluetooth headset outputs the first audio played by the mobile phone. The mobile phone detects that a second audio is being played on the smart screen (for example, the smart screen may send a message to the mobile phone), the mobile phone sends the audio data of the second audio to the Bluetooth headset, and the Bluetooth headset outputs the second audio. The user does not need to disconnect the mobile phone from the Bluetooth headset; as long as the mobile phone and the smart screen maintain the first connection, the audio data corresponding to the second audio played by the smart screen can be transmitted to the Bluetooth headset through the mobile phone, and the Bluetooth headset converts the audio data into the second audio for output. In this way, the delay caused by the user switching the Bluetooth headset connection can be avoided, improving user experience.
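The relay step described above can be sketched as follows: the phone receives audio frames from the smart screen over the first connection and forwards them to the Bluetooth headset over the existing second connection, with no re-pairing. This is an illustrative sketch only; the class and method names are not from the embodiments.

```python
from collections import deque

class AudioRelay:
    """Forwards audio frames from an upstream device to a sink device."""

    def __init__(self):
        self.to_headset = deque()  # frames queued for the second connection

    def on_frame_from_screen(self, frame: bytes) -> None:
        # Received over the first connection (e.g. Wi-Fi). The headset's
        # second connection is untouched; the phone simply re-sends.
        self.to_headset.append(frame)

    def flush(self) -> list:
        """Drain queued frames, simulating transmission over the second connection."""
        sent = list(self.to_headset)
        self.to_headset.clear()
        return sent

relay = AudioRelay()
relay.on_frame_from_screen(b"ding")
relay.on_frame_from_screen(b"dong")
frames = relay.flush()  # frames reach the headset in order
```

Because the phone only queues and re-sends, the headset never observes a link change, which is the source of the avoided switching delay.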
The smart screen can display a graphical user interface (for example, a video picture) while playing the audio. While sending the audio data to the mobile phone, the smart screen can also convert the graphical user interface into corresponding graphical user interface data and send it to the mobile phone, and the mobile phone can convert the graphical user interface data back into the graphical user interface for display on the mobile phone.
For example, as shown in fig. 4B, when the smart screen receives a user's call, the audio played by the smart screen is a ringtone and the smart screen simultaneously displays the graphical user interface 303, which includes call information (e.g., the caller's profile picture) and controls (an answer control and a reject control). The smart screen can send the data of this graphical user interface to the mobile phone, and the mobile phone displays the graphical user interface, so that the user can decide on the mobile phone whether to answer or reject the call without having to go to the smart screen to click the controls, improving user experience.
For example, the mobile phone of a first user establishes a first connection with the mobile phone of a second user, and the mobile phone of the first user is connected to a Bluetooth speaker. The mobile phone of the second user plays a movie, so the audio it plays is the audio of the movie. The mobile phone of the second user can send the picture of the movie to the mobile phone of the first user, and the mobile phone of the first user displays the picture of the movie; the mobile phone of the second user sends the audio of the movie to the Bluetooth speaker through the mobile phone of the first user. In this way, different users can watch the picture of the movie on their respective mobile phones while sharing the movie audio through the Bluetooth speaker, which can improve user experience.
While transmitting audio data across devices, the smart screen also sends the picture it displays to the mobile phone for playing, so that the user can watch the smart screen's picture on the mobile phone while using the audio device connected to the mobile phone, achieving audio and video sharing and enhancing user experience.
In some embodiments, the user may set, on the setup interface of the smart screen, whether to share the local screen while transmitting audio across devices. For example, as shown in FIG. 4C, on an exemplary setup interface 402, the user may click a control 404 to activate the "share local screen" function while playing audio locally.
In some embodiments, after the smart screen transmits the picture played by the smart screen to the mobile phone for display, the smart screen can be turned off, so that power consumption can be saved.
In some embodiments, the smart screen may also transmit the audio data to a bluetooth headset or bluetooth speaker connected to the local device, or sound through a local speaker.
When the smart screen performs cross-device sounding, it outputs audio through its local speaker, or through a Bluetooth headset or speaker connected to the smart screen itself; in this way, audio sharing can be achieved without disconnecting the earphone connected to the mobile phone.
In some embodiments, the smart screen may decide whether to transmit graphical user interface data or video picture data based on the type of service of the locally played audio.
For example, when the local audio is generated by an incoming call service, such service usually requires a user to perform interactive operation, the smart screen can transmit graphical user interface data to the mobile phone, so that interaction with the user is facilitated, and user experience is improved; when the local audio is generated by a music service (e.g., playing music through a music application or listening to a program through a radio), such service generally does not require interactive operation by the user, and the smart screen may not transmit graphical user interface data to the cell phone to save power consumption.
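The decision above reduces to a check on the service type that generated the audio: interactive services (calls) warrant sending the graphical user interface data, while passive services (music, radio) do not. A minimal sketch, with illustrative service-type names not taken from the embodiments:

```python
# Services that usually require user interaction, per the embodiment's example
INTERACTIVE_SERVICES = {"incoming_call", "voice_call", "video_call"}

def should_send_gui(service_type: str) -> bool:
    """Interactive services need the GUI mirrored to the phone; music does not,
    which saves power consumption on the link."""
    return service_type in INTERACTIVE_SERVICES

should_send_gui("incoming_call")  # GUI data is transmitted
should_send_gui("music")         # only audio data is transmitted
```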
In some embodiments, after the smart screen transmits the graphical user interface displayed locally to the mobile phone for display, the smart screen detects that the local power is lower than the first threshold value, and stops sharing the graphical user interface displayed locally.
By detecting the local battery level while transmitting the locally displayed graphical user interface, and automatically stopping the transmission when the battery is low, power consumption can be saved.
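The low-battery behavior can be sketched as a threshold check. The embodiment names a "first threshold" without specifying its value, so the 20% figure below is an assumed example:

```python
FIRST_THRESHOLD = 20  # percent; hypothetical value, not specified in the text

def gui_sharing_enabled(battery_percent: int, currently_sharing: bool) -> bool:
    """Stop sharing the locally displayed GUI once the local battery level
    drops below the first threshold; otherwise keep the current state."""
    if battery_percent < FIRST_THRESHOLD:
        return False
    return currently_sharing
```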
Optionally, the first connection may include a wired connection, and may also include at least one of wireless connection modes such as WiFi (wireless fidelity), Bluetooth, WiFi Direct, and NFC (near field communication).
Alternatively, the mobile phone and the smart screen may be connected through a wireless network provided by the same network device (e.g., a wireless router), that is, the mobile phone and the smart screen are connected to the same Wi-Fi network.
Optionally, before the first connection is established between the mobile phone and the smart screen, the user account logged in on the mobile phone and on the smart screen is the same, and the Bluetooth and Wi-Fi functions are enabled on both the mobile phone and the smart screen.
Alternatively, the handset and the smart screen may be connected using UWB (Ultra-wideband) technology.
Optionally, the user account registered by the mobile phone and the smart screen before the first connection is established is the same, wherein the login account information can be checked on a setting page of the mobile phone and the smart screen.
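The optional preconditions above (same logged-in account, same wireless router, Bluetooth and Wi-Fi enabled) can be summarized in one check. The field names below are illustrative, not from the embodiments:

```python
def can_establish_first_connection(phone: dict, screen: dict) -> bool:
    """Check the optional preconditions before establishing the first connection."""
    return (
        phone["account"] == screen["account"]          # same user account logged in
        and phone["wifi_ssid"] == screen["wifi_ssid"]  # connected to the same router
        and phone["bt_on"] and screen["bt_on"]         # Bluetooth enabled on both
    )

phone = {"account": "user1", "wifi_ssid": "home", "bt_on": True}
screen = {"account": "user1", "wifi_ssid": "home", "bt_on": True}
can_establish_first_connection(phone, screen)  # preconditions satisfied
```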
Alternatively, the second connection may comprise at least one of a wireless connection (e.g., a bluetooth connection), a wired connection.
Fig. 5 shows an exemplary graphical user interface for establishing a first connection between a mobile phone and a smart screen.
The mobile phone can be connected with other electronic devices to achieve multi-device collaborative management and resource sharing, and one-tap collaboration can be achieved with nearby electronic devices such as a tablet computer or a smart screen. For example, tasks on the mobile phone can be continued on the collaborating electronic device: a call or video playback can be continued smoothly on the smart screen, or work can easily be continued on a computer to edit files on the mobile phone.
On the exemplary user interface shown in fig. 5, the area line 502 divides the interface into two areas; the electronic devices within the area enclosed by the area line 502 represent the electronic devices and/or the third electronic device that have established a connection with the mobile phone. For example, the Bluetooth headset is within the area, indicating that the Bluetooth headset has established a connection with the mobile phone.
The user may drag the icon of the smart screen into the area enclosed by the area line 502. As shown in fig. 6, the smart screen icon is displayed within the area line, indicating that the smart screen and the mobile phone have established a connection.
The user can directly drag and connect other paired electronic equipment and mobile phones on the graphical user interface without manually pairing and connecting, so that the operation is convenient and fast, and the user experience is improved.
In some embodiments, before receiving the audio data sent by the smart screen, the mobile phone detects that it is connected to the smart screen, and the mobile phone detects whether there is a Bluetooth headset with which it has established a second connection; if so, it sends the connection information of the Bluetooth headset to the smart screen. Optionally, the connection information may include the MAC address of the Bluetooth headset.
After detecting that it is connected to the smart screen, the mobile phone checks the local Bluetooth headset connection status. The mobile phone can thereby actively perceive a Bluetooth headset capable of cross-device sounding and notify the smart screen (for example, in the form of a pop-up window or a notification), completing active-perception reporting. This facilitates user selection, avoids a cumbersome search process on the smart screen, and improves user experience.
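The active-perception reporting step can be sketched as follows: once the phone sees the first connection, it looks for a locally connected headset and, if found, reports the headset's connection information (e.g., its MAC address) to the smart screen. Function and field names here are illustrative assumptions:

```python
from typing import Optional

def report_headset(connected_headsets: list, send_to_screen) -> Optional[dict]:
    """If a headset with a second connection exists, notify the smart screen
    of its connection information; otherwise report nothing."""
    if not connected_headsets:
        return None
    info = {"mac": connected_headsets[0]["mac"]}  # connection information
    send_to_screen(info)  # e.g. surfaced on the screen as a pop-up or notification
    return info

notifications = []  # stand-in for the first connection to the smart screen
report_headset([{"mac": "AA:BB:CC:DD:EE:FF"}], notifications.append)
```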
In some embodiments, before receiving the audio data sent by the smart screen, the mobile phone detects that it is connected to the Bluetooth headset, and the mobile phone detects whether it is connected to the smart screen; if so, the mobile phone sends the connection information of the mobile phone and the Bluetooth headset to the second electronic device.
Optionally, if the mobile phone detects no smart screen with which the first connection has been established, the mobile phone stores the connection information of the Bluetooth headset connected to it locally on the mobile phone.
After detecting that it is connected to the Bluetooth headset, the mobile phone detects whether the local device has established a connection with the smart screen, and can notify the smart screen of the Bluetooth headset's connection information. This avoids the user manually searching for a third electronic device through which the smart screen can play sound across devices, improves user experience, and achieves real-time awareness.
In some embodiments, before receiving the audio data sent by the smart screen, the mobile phone receives a query request sent by the smart screen, where the query request is used to request connection information of a bluetooth headset connected to the mobile phone, and the mobile phone detects whether the mobile phone is connected with the bluetooth headset, and if so, sends the connection information of the bluetooth headset to the smart screen.
After establishing the connection with the mobile phone, the smart screen actively queries the mobile phone for a Bluetooth headset capable of cross-device sounding, achieving real-time awareness. This avoids the cumbersome process of the user manually searching on the smart screen for a Bluetooth headset capable of cross-device use, improving user experience.
In some embodiments, the mobile phone establishes a first connection with the smart screen, and the Bluetooth headset establishes a second connection with the mobile phone. The smart screen plays audio (audio generated by an incoming call, or music or video being played) and sends a request message to the Bluetooth headset through the mobile phone, where the request message is used to ask whether the Bluetooth headset is in a worn state. The Bluetooth headset detects through its sensor that the user is not wearing the headset and sends a feedback message to the mobile phone, which forwards it to the smart screen; having thus learned that the user is not wearing the headset, the smart screen sounds through a speaker of the mobile phone.
By detecting whether the user is wearing the Bluetooth headset and intelligently selecting a different sounding device accordingly, the user can be prevented from missing an important incoming-call notification because the sound is played through a headset that is not being worn, improving user experience.
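The wearing-state check reduces to a simple routing decision once the headset's sensor feedback has been forwarded back. A hedged sketch, with illustrative device names:

```python
def choose_output(headset_worn: bool) -> str:
    """Pick the sounding device based on the headset sensor's feedback message:
    route to the headset only when it is actually being worn, otherwise fall
    back to a speaker so an incoming call is not missed."""
    return "bluetooth_headset" if headset_worn else "speaker"

choose_output(True)   # headset worn: audio goes to the Bluetooth headset
choose_output(False)  # headset not worn: audio falls back to a speaker
```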
In some embodiments, before the mobile phone receives the audio data sent by the smart screen, the mobile phone plays a first audio and sends the audio data of the first audio to the Bluetooth headset; when the mobile phone detects that a second audio is being played on the smart screen, the mobile phone may pause transmitting the audio data corresponding to the first audio to the Bluetooth headset.
In this way, the mutual interference caused by two electronic devices sending audio data to the Bluetooth headset at the same time can be avoided, and the later-arriving audio is determined to be the audio the user wants to hear.
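The arbitration above can be sketched as a single active-source state on the headset link: detecting the second audio pauses the first audio's transmission, so only one device feeds the headset at a time. Names are illustrative, not from the embodiments:

```python
class HeadsetStream:
    """Tracks which device is allowed to send audio data to the headset."""

    def __init__(self):
        self.active_source = "phone"  # the first audio is playing initially

    def on_second_audio_detected(self) -> None:
        # Pause forwarding the first audio; the later-arriving audio is
        # treated as what the user currently wants to hear.
        self.active_source = "smart_screen"

stream = HeadsetStream()
stream.on_second_audio_detected()  # the smart screen's audio takes over
```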
In some embodiments, the handset and the smart screen establish a first connection, the handset and the bluetooth headset establish a second connection, the smart screen displays a graphical user interface, the smart screen receives a user-entered instruction (e.g., as shown in fig. 7, the user is detected clicking on control 701), and the bluetooth headset is determined to be the sound emitting device of the smart screen.
With the first connection established between the mobile phone and the smart screen and the second connection established between the mobile phone and the Bluetooth headset, the graphical user interface displayed on the smart screen informs the user in real time whether a device capable of sounding across devices exists, which facilitates user selection and improves user experience.
After the smart screen receives the Bluetooth headset connection information sent by the mobile phone, the smart screen may display a graphical user interface as shown in fig. 7. The content of the graphical user interface shown in fig. 7 may also be displayed in the form of a notification (for example, at the top of the screen or as a pop-up notification message), so that interference with the user can be avoided.
In some embodiments, the mobile phone and the smart screen establish a first connection, the mobile phone and the Bluetooth headset establish a second connection, the mobile phone receives a user-entered instruction (for example, detecting the user clicking on control 801), and the locally connected Bluetooth headset is configured as the sounding device for the smart screen.
The user can select on the mobile phone whether to allow the smart screen to sound through the Bluetooth headset, so the mobile phone can serve as a control center, improving user experience.
In some embodiments, in addition to establishing the first connection with the smart screen, the mobile phone establishes a third connection with the fourth electronic device, for example, the fourth electronic device may be a mobile phone, a smart screen, a tablet computer, etc., and the mobile phone may display a graphical user interface as shown in fig. 9 after being connected with the smart screen, the tablet computer, etc., where the description of the third connection may refer to the description of the second connection, which is not repeated herein.
In some embodiments, after the third connection is established between the mobile phone and the tablet computer, and the smart screen and the tablet computer play audio simultaneously, the mobile phone can detect which electronic device is playing audio, receive the audio data sent by the corresponding electronic device, and send the audio data to the Bluetooth headset.
In some embodiments, as shown in fig. 10A, after the third connection is established between the mobile phone and the tablet computer, if the tablet computer starts playing audio while the smart screen is playing audio, the mobile phone stops receiving the audio data sent by the smart screen, receives instead the audio data of the third audio sent by the tablet computer, and sends that audio data to the Bluetooth headset, and the Bluetooth headset converts the audio data of the third audio into the third audio for output.
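The "most recently started source wins" switching behavior above can be sketched as follows. The class and attribute names are illustrative assumptions, not part of the embodiment; a real relay would receive audio frames over the first and third connections and forward them over the Bluetooth link.

```python
class AudioRelay:
    """Mobile-phone-side relay: forwards only the current source's audio."""
    def __init__(self):
        self.source = None    # current audio data source device
        self.forwarded = []   # frames forwarded to the Bluetooth headset

    def on_playback_started(self, device: str):
        # Detecting that another device began playing switches the source
        # and implicitly stops receiving from the previous source.
        self.source = device

    def on_audio_data(self, device: str, frame: bytes):
        # Only the current source's data is forwarded to the headset;
        # frames from a stale source are dropped.
        if device == self.source:
            self.forwarded.append(frame)
```

For instance, if the smart screen is playing and the tablet then starts, subsequent smart-screen frames are dropped while tablet frames are forwarded.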
Optionally, the mobile phone may display a graphical user interface, and receive an instruction input by a user (set a corresponding audio data source device), receive audio data sent by the audio data source device, and send the audio data to the bluetooth headset.
For example, as illustrated in the schematic graphical user interface of fig. 10B, the electronic device connected to the mobile phone includes a smart screen and a tablet computer, the mobile phone may receive an instruction input by the user (e.g., the user clicks the selection control 1001), select the smart screen as the audio data source device, and receive audio data sent by the smart screen, and send the audio data to the bluetooth headset.
By providing the graphical user interface on the mobile phone and letting the user select the audio data source device on it, the user can take the initiative in choosing which audio data to receive, and a conflict in which two electronic devices transmit audio data simultaneously can be avoided.
Optionally, the mobile phone is connected with the smart screen and the tablet computer respectively, the smart screen and the tablet computer play audio simultaneously, and the mobile phone can determine the audio data source equipment to be received according to the priorities of the audio service types played by different electronic equipment.
Optionally, the audio service type may include a call service, a video service, a music service, and a notification service.
Optionally, the call service has a higher priority than the music service and the video service.
Optionally, the notification service has a higher priority than the music service.
Optionally, the notification service may include a notification service generated by an alarm clock or a notification service generated by a reminder.
For example, when the smart screen is playing music, the mobile phone receives the audio data of the music service sent by the smart screen and transmits the audio data to the Bluetooth headset. When the mobile phone detects that the tablet computer has an incoming call service (or the tablet computer notifies the mobile phone when it has an incoming call service), the mobile phone compares the priority of the smart screen's music service with that of the tablet computer's incoming call service, determines that the incoming call service has the higher priority, stops receiving the audio data of the smart screen, receives the audio data corresponding to the incoming call service of the tablet computer, and sends the tablet computer's audio data to the Bluetooth headset, so that the user can talk through the Bluetooth headset.
If the priority of the alarm clock service is set higher than the priority of the music service, then when the user is listening through the Bluetooth headset to music played on the smart screen and the tablet computer has a reminder item, or the alarm clock on the tablet computer goes off, the mobile phone stops transmitting the smart screen's music service data to the headset and preferentially transmits the audio data corresponding to the tablet computer's reminder service or alarm clock service to the headset, so as to remind the user and prevent the user from forgetting planned items through being immersed in the music.
By judging the service priority, the audio to be output by the third electronic device can be determined according to the user's priorities for different services, so that manual selection by the user is avoided and user experience can be improved.
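The service-type comparison described above can be sketched as a small priority table. The numeric weights are illustrative assumptions; the embodiment only states the ordering (call above music and video, notification above music).

```python
# Assumed weights encoding the stated ordering: call > notification > music/video.
SERVICE_PRIORITY = {
    "call": 3,
    "notification": 2,  # alarm-clock or reminder notifications
    "music": 1,
    "video": 1,
}


def pick_source(playing: dict) -> str:
    """playing maps a device name to the audio service type it is playing.
    Returns the device whose service has the highest priority; its audio
    data would then be forwarded to the headset."""
    return max(playing, key=lambda dev: SERVICE_PRIORITY[playing[dev]])
```

In the incoming-call example above, `pick_source({"smart_screen": "music", "tablet": "call"})` selects the tablet computer.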
Alternatively, the handset may determine the audio data source device based on the priority of the different electronic devices. For example, when the smart screen and the tablet computer play audio simultaneously, the mobile phone may determine the audio data source device according to the priority between the smart screen and the tablet computer.
By judging the priorities of different electronic devices, the sounding requirements of the electronic devices with high priorities can be preferentially met, manual switching of users is avoided, and user experience is improved.
Optionally, the mobile phone can periodically count the frequencies at which it connects to the smart screen and to the tablet computer respectively, and rank their priorities according to those frequencies. For example, the mobile phone counts the connection frequency of the local device with the smart screen and with the tablet computer over a week (or a month); finding that the connection frequency with the smart screen is higher, it can determine that the user more often needs to interact with the smart screen, so the smart screen is given the higher priority and is set as the audio data source device.
By periodically counting the frequency of connection with different electronic devices and updating their priorities accordingly, the user's preference can be matched, manual adjustment by the user is avoided, and user experience is improved.
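The frequency-based ranking above amounts to counting connections per device over the statistics period and sorting. A minimal sketch, with the log format as an assumption:

```python
from collections import Counter


def rank_by_connection_frequency(connection_log: list) -> list:
    """connection_log is a list of device names, one entry per recorded
    connection in the statistics period (e.g. one week or one month).
    Returns devices ordered from highest to lowest connection frequency;
    the first entry would be set as the audio data source device."""
    counts = Counter(connection_log)
    return [device for device, _ in counts.most_common()]
```

With a week's log containing five smart-screen connections and two tablet connections, the smart screen is ranked first.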
Alternatively, as shown in fig. 11, after the mobile phone switches audio data source devices, a notification message 1101 may be displayed on the mobile phone, including "the audio data source device has been switched to the electronic device 104".
Optionally, a control 1103 may also be included on message 1101. The user may click on control 1103 to cancel the switching of the audio data source device. For example, the mobile phone is connected with an earphone, a smart screen and a tablet computer; at a first moment, the audio data of the smart screen is transmitted to the earphone through the mobile phone; at a second moment, the tablet computer sounds, the mobile phone stops receiving the audio data of the smart screen, and a notification that the audio data source device has been switched to the tablet computer is displayed on the mobile phone; the user can click on control 1103 to cancel the switching, and the mobile phone continues to receive the audio data of the smart screen.
Optionally, after the mobile phone switches the audio data source device, the current audio data source device may be indicated in the status bar of the mobile phone. This avoids interference with the user, and the user can also quickly locate the current audio data source device through the identifier in the status bar, instead of searching among the plurality of electronic devices connected to the mobile phone for the one that is playing audio.
Optionally, the mobile phone may receive an instruction input by the user, and adjust priorities of the second electronic device and the fourth electronic device.
It should be appreciated that in some other embodiments, the method by which the mobile phone selects the audio data source, and its beneficial effects, may be extended from the three-device case (the scenario in which the mobile phone is connected to the smart screen and the tablet computer respectively) to situations in which a plurality of other electronic devices play audio simultaneously.
In some embodiments, the smart screen sounds through a Bluetooth headset connected to the mobile phone; when the mobile phone detects that it is itself playing first audio, the mobile phone can stop receiving the audio data sent by the smart screen and send the audio data corresponding to the locally played first audio to the Bluetooth headset.
For example, the handset may display a graphical user interface as shown in FIG. 12, detect a user input instruction (e.g., the user clicks control 1201), stop receiving audio data sent by the smart screen (or tablet) and send the audio data played by the handset to the Bluetooth headset.
Optionally, after the mobile phone detects that it is playing the first audio, the mobile phone can determine the respective priorities of the audio service played locally and the audio service played by the smart screen (or by a tablet computer connected with the mobile phone), and determine whether to switch the transmitted audio data. For the description of services and priorities, refer to the related descriptions of the embodiments of figs. 10A-10B, which are not repeated herein.
Optionally, when the mobile phone detects that it no longer plays audio, it can resume receiving the audio data corresponding to the audio played by the smart screen (or the tablet computer).
For example, the user is listening through the Bluetooth headset to music from the smart screen (the smart screen transmits the audio data corresponding to the music to the mobile phone, and the mobile phone transmits the audio data to the Bluetooth headset); when the mobile phone detects a local incoming call service, it switches to the call, and after the call ends it can automatically resume receiving the data transmitted by the smart screen and transmit the data to the Bluetooth headset. Manual restoration by the user is thereby avoided, and user experience is improved.
In some embodiments, the smart screen sounds through the Bluetooth headset connected to the mobile phone; when the mobile phone detects that it has an incoming call, the mobile phone can play its incoming call audio through its own speaker, ensuring that the audio data sent by the smart screen is not interrupted. If the third electronic device is a speaker device that sounds externally, such as a Bluetooth speaker, playing the incoming call through the mobile phone's local speaker can also avoid privacy disclosure.
In some embodiments, the user may control the smart screen (or tablet) through the bluetooth headset.
The user can control audio playback on the smart screen (for example, previous, next, and pause operations in a music playback application) through touch operations on the Bluetooth headset. For example, the user's touch input on the Bluetooth headset can be converted into instruction data and sent to the mobile phone, and the mobile phone sends the instruction data to the smart screen, so that the Bluetooth headset can control the smart screen.
The Bluetooth headset can also collect the user's voice signal, convert it into audio data, and send the audio data to the mobile phone; the mobile phone sends the audio data to the smart screen, and the smart screen converts the audio data into an operation instruction.
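The touch-control path above can be sketched as gesture-to-instruction conversion plus relay through the phone. The gesture-to-command mapping and all names are illustrative assumptions; the embodiment does not specify which gestures map to which operations.

```python
# Assumed mapping from headset touch gestures to playback commands.
TOUCH_COMMANDS = {
    "double_tap": "pause",
    "swipe_forward": "next_track",
    "swipe_back": "previous_track",
}


def headset_touch_to_instruction(gesture: str) -> dict:
    # Instruction data the headset sends to the mobile phone.
    return {"kind": "instruction", "command": TOUCH_COMMANDS[gesture]}


def phone_relay(message: dict) -> dict:
    # The phone forwards the instruction data to the smart screen
    # unchanged; the route tag is for illustration only.
    return {"route": "phone->smart_screen", **message}
```

A double tap on the headset would thus reach the smart screen as a pause instruction.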
In some embodiments, the Bluetooth headset further comprises a sensor, the Bluetooth headset transmits data acquired by the sensor to the mobile phone, and the mobile phone transmits the data to the smart screen.
For example, the user is watching a fitness training course played on the smart screen, and the audio data played by the smart screen is sent to the mobile phone and forwarded by the mobile phone to the Bluetooth headset. The Bluetooth headset can collect the user's physiological information (such as heart rate and blood oxygen saturation) and send it to the mobile phone, and the mobile phone forwards it to the smart screen. The smart screen can then evaluate the user's physiological information and adjust the played training course, where the adjustment includes reducing the movement rhythm or pausing, or giving a prompt (for example, prompting the user in text or in voice that the heart rate is too high).
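The evaluation step on the smart screen might look like the following sketch. The heart-rate thresholds and action names are purely illustrative assumptions, not values from the embodiment.

```python
def adjust_course(heart_rate_bpm: int) -> str:
    """Decide how the smart screen adjusts the training course based on the
    heart rate relayed from the headset via the phone (thresholds assumed)."""
    if heart_rate_bpm > 180:
        return "pause"          # far too high: pause and prompt the user
    if heart_rate_bpm > 160:
        return "reduce_tempo"   # too high: lower the movement rhythm
    return "continue"           # within range: keep playing the course
```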
Fig. 13 shows a method for cross-device audio data transmission, provided in an embodiment of the present application, applied to a first electronic device, including:
establishing a first connection with a second electronic device and establishing a second connection with a third electronic device, wherein the first electronic device and the second electronic device both comprise a display screen, and the third electronic device comprises a loudspeaker;
receiving audio data sent by the second electronic equipment, wherein the audio data is audio data corresponding to audio played by the second electronic equipment;
receiving graphical user interface data sent by the second electronic device, wherein the graphical user interface data is the data of a graphical user interface displayed when the second electronic device plays audio;
transmitting the audio data to the third electronic device, so that the third electronic device converts the audio data into audio output;
and converting the graphical user interface data into a graphical user interface to be displayed on a display screen of the local machine.
For example, the mobile phone is connected with the smart screen, and the mobile phone is connected with the earphone through Bluetooth; the smart screen transmits the audio data corresponding to the second audio played locally to the mobile phone, the mobile phone transmits the audio data of the second audio to the earphone, and the earphone converts the audio data and outputs the second audio.
The second audio played by the second electronic device is transmitted to the third electronic device of the first electronic device through the first electronic device to be played, and delay caused by switching connection and re-pairing of the third electronic device can be avoided. The user does not need to pair the third electronic equipment with the second electronic equipment, and only needs the first electronic equipment to be connected with the third electronic equipment, so that the user can receive the audio played by the second electronic equipment through the third electronic equipment, and the user experience is improved.
The second electronic device can display a graphical user interface while playing the audio, and can also send the graphical user interface data to the mobile phone; after receiving the graphical user interface data, the mobile phone converts it into a graphical user interface and displays it on the display screen of the mobile phone.
For example, as shown in fig. 4B, when the smart screen receives a call for the user, the audio played by the smart screen is the incoming call ring, and at the same time the graphical user interface 303 is displayed. The graphical user interface 303 includes incoming call information (for example, the caller's avatar) and controls (an answer control and a reject control). The smart screen can send the corresponding data of the graphical user interface to the mobile phone, and the mobile phone displays the graphical user interface, so that the user can decide on the mobile phone whether to answer or reject the call without having to go to the smart screen to click the controls, and user experience can be improved.
For example, in a scenario where the first electronic device and the second electronic device are both mobile phones, for example the first electronic device is user A's mobile phone and the second electronic device is user B's mobile phone, user B may transmit the movie picture being played on user B's phone to user A's mobile phone for display, and play the audio of the movie through a speaker connected to user A's mobile phone, so that sharing of the movie's audio and video can be achieved.
Optionally, the first connection may include a wired connection, and may also include at least one of WiFi (Wireless Fidelity), Bluetooth, WiFi Direct, NFC (Near Field Communication), and other wireless connection manners.
Alternatively, the first connection may comprise a connection through a wireless network provided by the same network device (e.g., a wireless router). For example, the first electronic device and the second electronic device are both connected to the wireless network of the same wireless router.
When the first electronic device and the second electronic device, together with a plurality of other electronic devices, are connected to the same wireless network, the second electronic device transmits the graphical user interface data or video picture data to the first electronic device at the same time as it transmits the audio data, so that the user can operate on the first electronic device and interact with the application in the second electronic device (or with the second electronic device itself), and the user is saved from searching among the plurality of electronic devices for the one that is playing audio. For example, the user returns home with the earphone, the earphone is connected with the mobile phone, and the mobile phone is connected with the smart screen, tablet computer and personal computer in the home. When the smart screen has an incoming call, the smart screen can send the incoming call audio data and the incoming call picture data to the mobile phone, the mobile phone sends the audio data to the earphone, and the user can hear the incoming call ring through the earphone and decide on the mobile phone whether to answer the call, so that the user is saved from searching among the plurality of electronic devices for the one with the incoming call, and user experience is improved.
Alternatively, the first electronic device and the second electronic device may implement the first connection using an Ultra-wideband (UWB) connection.
Optionally, before the first electronic device and the second electronic device establish the first connection, the same user account is logged in on the mobile phone and the smart screen, and the Bluetooth and WiFi functions are enabled on both the mobile phone and the smart screen.
Optionally, the second connection comprises at least one of a wireless connection, a wired connection. For example, the second connection may be a bluetooth connection.
Optionally, before the first electronic device receives the audio data sent by the second electronic device, it detects that the local device is connected with the second electronic device, and sends connection information of a third electronic device to the second electronic device, where the third electronic device establishes a second connection with the local device.
Alternatively, the connection information may include a MAC address.
Optionally, before the first electronic device receives the audio data sent by the second electronic device, it detects that the local device is connected with the third electronic device, and sends connection information of the third electronic device to the second electronic device, where the first electronic device and the second electronic device establish a first connection.
Alternatively, the connection information may include a MAC address.
Optionally, before receiving the audio data sent by the second electronic device, the first electronic device receives a query request sent by the second electronic device and sends the connection information of the third electronic device to the second electronic device, where the query request is used to request the connection information of the third electronic device, and the third electronic device establishes a second connection with the local device.
Alternatively, the connection information may include a MAC address.
Optionally, the first electronic device further establishes a third connection with the fourth electronic device, displays a graphical user interface locally, receives an instruction input by a user, and determines an audio data source device, wherein the audio data source device comprises the second electronic device and the fourth electronic device, receives audio data sent by the audio data source device, and sends the audio data to the third electronic device.
Optionally, the local machine further establishes a third connection with the fourth electronic device, detects that the second electronic device and the fourth electronic device play audio simultaneously, and determines an audio data source device to be received according to the priority of the audio service types played by the second electronic device and the fourth electronic device, wherein the audio data source device comprises the second electronic device and the fourth electronic device.
Optionally, the audio service type may include a call service, a video service, a music service, and a notification service.
Optionally, the call service has a higher priority than the music service and the video service.
Optionally, the notification service has a higher priority than the music service.
Optionally, the notification service may include a notification service generated by an alarm clock or a notification service generated by a reminder.
Optionally, the local device further establishes a third connection with the fourth electronic device, detects that the second electronic device and the fourth electronic device play audio simultaneously, and determines an audio data source device to be received according to the priorities of the second electronic device and the fourth electronic device, wherein the audio data source device comprises the second electronic device and the fourth electronic device.
Optionally, the local device periodically counts the frequencies of its connection with the second electronic device and the fourth electronic device respectively, and performs priority ranking according to the frequencies.
Optionally, after the data source device is switched, the first electronic device displays a notification message, where the notification message is used to notify the user of the current data source device.
Optionally, after the data source device is switched, the first electronic device displays a notification message, where the notification message includes a first control, and the first electronic device cancels the data source device switching when detecting an instruction input by the user on the first control.
Fig. 14 shows a method for cross-device audio data transmission, provided in an embodiment of the present application, applied to a second electronic device, including:
establishing a first connection with a first electronic device;
detecting that the local device is playing audio, and sending the audio data corresponding to the audio to the first electronic device, so that the first electronic device sends the audio data to a third electronic device, where the third electronic device and the first electronic device establish a second connection;
and sending the graphical user interface data to the first electronic device so that the first electronic device displays a graphical user interface, wherein the graphical user interface data is the data of the graphical user interface displayed when the audio is played locally.
Optionally, before sending the audio data to the first electronic device, sending a query request to the first electronic device, where the query request is used to request connection information of a third electronic device connected to the first electronic device.
Alternatively, the user may set, at the setup interface of the second electronic device, whether to share the native screen while transmitting audio across the devices.
For example, on the illustrative setup interface 402, the user may click on control 404 to activate the "share native screen" function while playing audio locally.
By letting the user select whether to transmit the graphical user interface data while transmitting the audio data, power consumption can be saved.
Optionally, the second electronic device may decide whether to transmit the graphical user interface data according to the service type of the audio played locally.
For example, when the local audio is generated by an incoming call service, such service generally requires a user to perform an interactive operation, the second electronic device may transmit the graphical user interface data to the first electronic device, so as to facilitate interaction with the user and improve user experience; when the local audio is generated by a music service (e.g., the second electronic device is playing music through a music application or listening to a program through a radio), such a service generally does not require interactive operation by the user, the second electronic device may not transmit graphical user interface data to the first electronic device to save power consumption.
Optionally, after the second electronic device transmits the picture (for example, a graphical user interface) displayed by the local device to the first electronic device for playing, the second electronic device may turn off the screen, so that power consumption may be saved.
Optionally, after the second electronic device transmits the picture (for example, a graphical user interface) displayed by the local device to the first electronic device for display, the second electronic device may detect the electric quantity of the local device, and may stop transmitting the graphical user interface data when detecting that the electric quantity of the local device is lower than the first threshold, so that power consumption may be saved.
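The two optional policies above (transmit GUI data only for services that need user interaction, and stop transmitting when the battery is low) can be combined in a single decision, sketched below. The battery threshold value and the set of interactive services are illustrative assumptions; the embodiment only names the first threshold without fixing its value.

```python
INTERACTIVE_SERVICES = {"call"}   # services assumed to need on-screen controls
BATTERY_THRESHOLD = 0.20          # assumed value for the "first threshold" (20%)


def should_send_gui_data(service_type: str, battery_level: float) -> bool:
    """Second-device-side decision: send graphical user interface data to
    the first electronic device only when the service is interactive and
    the local battery is above the threshold."""
    if battery_level < BATTERY_THRESHOLD:
        return False  # stop transmitting GUI data to save power
    return service_type in INTERACTIVE_SERVICES
```

Under these assumptions, an incoming call with a charged battery transmits GUI data, while a music service, or any service on a nearly drained battery, does not.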
In the case of dividing the respective functional modules by the respective functions, fig. 15 is a schematic diagram showing one possible composition of the first electronic device involved in the above-described embodiment, including:
a connection unit: for establishing a first connection with a second electronic device, and/or for establishing a second connection with a third electronic device, and/or for establishing a third connection with a fourth electronic device, and/or for other processes of the techniques described herein.
A receiving unit: for receiving audio data transmitted by the second electronic device, and/or for receiving graphical user interface data transmitted by the second electronic device, and/or for receiving a query request transmitted by the second electronic device, and/or for receiving audio data or instruction data transmitted by the third electronic device, and/or for other processes of the techniques described herein.
A transmitting unit: for transmitting audio data to a third electronic device, and/or for transmitting connection information of the third electronic device to a second electronic device, and/or other processes for the techniques described herein.
And a display unit: for displaying a graphical user interface.
It should be noted that, for all relevant contents of each step related to the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated herein.
In case an integrated unit is employed, the electronic device may comprise a processing module, a storage module and a communication module. The processing module may be configured to control and manage actions of the electronic device, for example, may be configured to support the electronic device to perform the steps performed by the connection unit, the receiving unit, the sending unit, the comparing unit, and the statistics unit. The memory module may be used to support the electronic device to execute stored program code, data, etc. And the communication module can be used for supporting the communication between the electronic device and other devices.
Wherein the processing module may be a processor or a controller, which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination that implements computing functions, for example, a combination including one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 1.
In the case of dividing the respective functional modules by the respective functions, fig. 16 is a schematic diagram showing one possible composition of the second electronic device involved in the above-described embodiment, including:
a connection unit for establishing a first connection with a first electronic device, and/or for other processes of the techniques described herein.
And a receiving unit configured to receive third electronic device connection information sent by the first electronic device, where the first electronic device and the third electronic device establish a second connection, and/or other processes for the techniques described herein.
a sending unit, configured to send audio data of locally played audio to the first electronic device, and/or to send data of a locally displayed graphical user interface to the first electronic device, and/or to perform other processes of the techniques described herein;
a playback unit, configured to play local audio, and/or to perform other processes of the techniques described herein;
and a display unit, configured to display a graphical user interface.
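The unit composition of the second electronic device listed above can be sketched as a single class whose methods correspond to the connection, receiving, sending, playback, and display units. This is an illustrative assumption, not the disclosed implementation; the device and method names are hypothetical.

```python
class SecondElectronicDevice:
    """Hypothetical sketch of the second electronic device's functional units."""

    def __init__(self):
        self.connections = set()          # first devices with an established first connection
        self.known_third_devices = {}     # first device -> third device it is connected to
        self.sent = []                    # (peer, audio_data, gui_data) records

    # Connection unit: establish the first connection with a first electronic device.
    def connect(self, first_device_id):
        self.connections.add(first_device_id)

    # Receiving unit: record the third-device connection info sent by the first device.
    def receive_connection_info(self, first_device_id, third_device_id):
        self.known_third_devices[first_device_id] = third_device_id

    # Sending unit: send audio data and GUI data of locally played content
    # to a first device with which a connection has been established.
    def send(self, first_device_id, audio_data, gui_data):
        if first_device_id in self.connections:
            self.sent.append((first_device_id, audio_data, gui_data))

    # Playback unit / display unit: play audio and show the GUI locally.
    def play_and_display(self, audio_data, gui_data):
        return "playing {!r}, showing {!r}".format(audio_data, gui_data)
```

In this sketch, the second device only sends audio and GUI data once the first connection exists, matching the ordering implied by the claim: connection first, then connection-information exchange, then data transfer.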
Fig. 17 shows a system provided in an embodiment of the present application, where the system may include the first electronic device, the second electronic device, and the third electronic device provided in embodiments of the present application.
The present embodiment also provides a computer storage medium having stored therein computer instructions which, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the method for cross-device audio data transmission in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-described related steps to implement the method of cross-device audio data transmission in the above-described embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is configured to store computer-executable instructions, and when the apparatus is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the method for transmitting the cross-device audio data in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, and the chip provided in this embodiment are all configured to execute the corresponding methods provided above. Therefore, for the beneficial effects thereof, reference may be made to the beneficial effects of the corresponding methods provided above; details are not described herein again.
It may be appreciated by those skilled in the art that, for convenience and brevity of description, only the foregoing division of the functional modules is used as an example for illustration. In practical application, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to perform all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division of modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to perform all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.