Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a split-type mobile terminal according to an embodiment of the present disclosure. As shown in fig. 1, the split-type mobile terminal 10 of the present embodiment includes a host end 11 and a screen end 12 that are detachable and can communicate with each other. The host end 11 can perform communication-related information processing, for example, sending information to or receiving information from the network and then forwarding the information to the screen end 12, so that the user can receive required information by holding only the screen end 12, for example, making a call or watching video through the screen end 12. The detachable host end 11 and screen end 12 can be selectively assembled together or detached according to the use condition. For example, if the user's range of motion is small, such as at home, the host end 11 and the screen end 12 may be separated: the host end 11 is placed at a fixed position, the user holds only the screen end 12, and the small size of the screen end 12 makes it convenient to use. For another example, if the user needs to go out, the host end 11 and the screen end 12 can be assembled together and carried as one unit, which is convenient for the user.
The host end 11 may include a WiFi (Wireless Fidelity) module 111, a Bluetooth (BT) module 112, a GPS (Global Positioning System) module 113, a system chip 114, a battery 115, a radio frequency module 116, a storage module 117, and a voice module 118.
The screen end 12 may include a WiFi module 121, a Bluetooth (BT) module 122, a GPS module 123, a system chip 124, a battery 125, a SIM card 126, and a voice module 127.
The host end 11 and the screen end 12 can establish a communication connection through the WiFi modules 111 and 121, and can also establish a communication connection through the Bluetooth modules 112 and 122. Specifically, when the host end 11 and the screen end 12 are far away from each other, a communication connection can be established through the WiFi modules 111 and 121; when the host end 11 and the screen end 12 are close to each other, a communication connection can be established through the WiFi modules 111 and 121 or through the Bluetooth modules 112 and 122.
The GPS module 113 is used to locate the position of the host end 11.
The system chip 114 may include a baseband processor and a modem. The baseband processor decodes the received baseband signal, and the modem mainly performs the Gaussian Minimum Shift Keying (GMSK) modulation/demodulation required by the GSM system.
The battery 115 is used to supply power to the other components of the host end 11.
The radio frequency (RF) module 116 includes an RF front-end module (FEM), which specifically includes an antenna tuner, an RF switch, a power amplifier (PA) module, and so on, and is used for receiving and transmitting radio frequency signals.
With the popularity of CMOS RFICs (complementary metal oxide semiconductor radio frequency integrated circuits), more and more modules are being moved from discrete devices into integrated circuits. However, some devices cannot currently be integrated onto conventional CMOS RFICs for various reasons. These radio frequency devices that cannot be integrated onto RFICs are commonly referred to as radio frequency front-end modules (RF FEMs). The FEM is farther from the baseband and closer to the antenna.
The storage module 117 includes a RAM (Random Access Memory) and a ROM (Read-Only Memory). The RAM loses its contents when power is off, and is therefore mainly used for storing programs in short-term use. The ROM is a solid-state semiconductor memory from which only data stored in advance can be read.
The voice module 118 is used for receiving a voice signal of a user.
The GPS module 123 is used to locate the position of the screen end 12.
The system chip 124 is used to process the received information for display at the screen end 12.
The battery 125 is used to supply power to the other components of the screen end 12.
The SIM card 126 (Subscriber Identity Module) is required by GSM digital mobile phones: a GSM digital mobile phone must be equipped with the SIM card 126 before it can be used.
The SIM card 126 stores the digital mobile phone user's information, encryption keys, the user's phone book, and other contents, which can be used to identify the user on the GSM network and to encrypt voice information while the user is talking.
In another embodiment, a SIM card may also be provided in the host end 11. That is, based on the communication connection between the host end 11 and the screen end 12, the telephone function can be realized with a SIM card provided in only one of them.
The voice module 127 is used for receiving a voice signal of a user.
Because the host end 11 and the screen end 12 both include a voice module, that is, both can be used to receive the user's voice, the subject with the better sound-reception effect can be selected to perform sound reception according to the actual situation during use.
In a specific application, an input of the user is received first, and the sound-reception subject selected by the user is then determined according to the input, that is, whether the subject selected by the user is the screen end 12 or the host end 11. If the user selects the screen end 12 as the sound-reception subject, the second voice module 127 receives the user's voice, and a communication module such as the WiFi module 121 or the Bluetooth module 122 then sends the received voice to the host end 11 for processing; for example, the radio frequency module 116 of the host end 11 can transmit the voice.
If the user selects the host end 11 as the sound-reception subject, the user's voice is received through the first voice module 118, and the voice can further be transmitted through the radio frequency module 116.
The user input can be entered on the screen end 12 by touching the screen, or by keys or voice on the screen end 12. Similarly, the input can be performed by keys or voice on the host end 11.
The sound-reception subject selected by the user may be determined according to a preset rule. For example, the preset rule may be: if the user only clicks the corresponding dial or answer button when making or receiving a call, the screen end 12 is selected by default to receive sound; if the user clicks the hands-free button after clicking the corresponding dial or answer button, the host end 11 is selected by default to receive sound.
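The preset rule above can be sketched as a small decision function. The function and constant names below are illustrative assumptions for the sketch, not part of the embodiment itself.

```python
# Hypothetical sketch of the preset default rule described above.
SCREEN_END = "screen_end_12"
HOST_END = "host_end_11"

def pick_pickup_subject(pressed_buttons):
    """Return the default sound-reception subject for a call.

    pressed_buttons is the ordered list of buttons the user clicked,
    e.g. ["dial"] or ["answer", "hands_free"].
    """
    if "hands_free" in pressed_buttons:
        # Hands-free clicked after dial/answer: host end 11 receives sound.
        return HOST_END
    if "dial" in pressed_buttons or "answer" in pressed_buttons:
        # Only dial/answer clicked: screen end 12 receives sound by default.
        return SCREEN_END
    return None  # no call in progress, so no default subject

# pick_pickup_subject(["dial"]) -> "screen_end_12"
# pick_pickup_subject(["answer", "hands_free"]) -> "host_end_11"
```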
The sound-reception subject selected by the user may also be determined by providing a selection window when sound reception is needed. For example, a window can pop up when a call is made, prompting the user to select sound reception at the host end 11 or at the screen end 12.
In one embodiment, the voice modules 118 and 127 may include microphones, as shown in fig. 2. One of the microphone 1181 at the host end 11 and the microphone 1271 at the screen end 12 is a unidirectional microphone and the other is an omnidirectional microphone, or both are unidirectional microphones, or both are omnidirectional microphones.
In one embodiment, the microphone 1181 may be configured as an omnidirectional microphone and the microphone 1271 as a unidirectional microphone. Then, when the user makes a call while holding the screen end 12, i.e., when hands-free is not selected, the sound-reception subject selected by the user can default directly to the microphone 1271 of the screen end 12. When a call is made in a hands-free scenario, the microphone 1181 of the host end 11, which has the better sound-reception effect, can be used for sound reception. In a meeting-recording scenario, the microphone 1181 of the host end 11 can likewise be used for sound reception, while the screen end 12 performs normal display operations.
In one embodiment, as shown in fig. 3, the screen end 12 further includes a wake-up module 128, and the host end 11 includes a wake-up module 119. When the user selects both the host end 11 and the screen end 12 as the sound-reception subject, the microphone 1181 or the microphone 1271 receives the user's voice, and after the voice is successfully received, a wake-up prompt is presented to the user through the wake-up module 128 and the wake-up module 119.
That is, the user can also select the host end 11 and the screen end 12 to receive voice together, and this selection can be determined by a preset specific statement or a specific rule, for example, by a specific statement such as "Siri", "Xiaona", "Google Assistant", and so on, which indicates that the sound-reception subject selected by the user is the host end 11 together with the screen end 12.
If the microphone 1271 or the microphone 1181 receives a specific statement indicating that the user has selected the host end 11 and the screen end 12 to receive voice together, a wake-up prompt is presented to the user through the wake-up modules.
The wake-up module may include a low-power processor that can remain on at all times, even when the user has not selected any sound-reception subject; the microphone likewise remains always on. The wake-up module may further include a prompting element, such as a flash or a motor, which can alert the user by flashing the flash or vibrating the motor.
In one particular application, the processors of the wake-up modules 119 and 128 are in an on state. If the user cannot find the split-type mobile terminal, a specific wake-up statement, such as "Siri", can be spoken aloud. If the omnidirectional microphone 1181 receives the wake-up statement, it is determined that the sound-reception subject selected by the user is the host end 11 together with the screen end 12. The voice is sent to the corresponding wake-up module 119, whose processor controls the corresponding prompting element to issue a prompt to the user, and the wake-up statement is further sent to the screen end 12 through the WiFi module 111 or the Bluetooth module 112, so that the processor and prompting element in the wake-up module of the screen end 12 also issue a prompt, helping the user find the split-type mobile terminal.
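The wake-up flow just described can be sketched roughly as follows. The `WakeUpModule` class, its method names, and the in-memory prompt list are all hypothetical; a real implementation would drive the actual flash/motor hardware and forward the statement over the real WiFi or Bluetooth link.

```python
# Illustrative sketch of the find-my-terminal wake-up flow (assumed names).
WAKE_UP_STATEMENTS = {"siri", "xiaona", "google assistant"}

class WakeUpModule:
    def __init__(self, name):
        self.name = name
        self.prompts = []   # record of prompts issued (stands in for flash/motor)
        self.peer = None    # the wake-up module at the other end

    def on_voice(self, utterance, forwarded=False):
        """Prompt locally and forward to the peer if this is a wake-up statement."""
        if utterance.strip().lower() not in WAKE_UP_STATEMENTS:
            return False
        self.prompts.append("flash/vibrate")  # local prompting element fires
        if not forwarded and self.peer is not None:
            # Forward over WiFi/Bluetooth so the other end also prompts.
            self.peer.on_voice(utterance, forwarded=True)
        return True

host_wake = WakeUpModule("wake-up module 119")    # host end 11
screen_wake = WakeUpModule("wake-up module 128")  # screen end 12
host_wake.peer, screen_wake.peer = screen_wake, host_wake

# The host microphone 1181 hears "Siri": both ends prompt the user.
host_wake.on_voice("Siri")
```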
Similarly, if the microphone 1271 receives the wake-up statement, it is handled on the same principle as when the microphone 1181 receives it, and the description is not repeated here.
In an embodiment, the voice modules 127 and 118 may further include speakers. As shown in fig. 4, compared with the split-type mobile terminal shown in fig. 2, each of the voice modules 127 and 118 further includes a speaker. That is, the voice module 118 may include a microphone 1181 and a speaker 1182, and the voice module 127 may include a microphone 1271 and a speaker 1272.
When the user selects the host end 11 as the sound-reception subject, the microphone 1181 receives the user's voice and sends it to the screen end 12 for amplification through the speaker 1272. When the user selects the screen end 12 as the sound-reception subject, the microphone 1271 receives the user's voice and sends it to the host end 11 for amplification through the speaker 1182.
The manner in which the user selects the host end 11 or the screen end 12 as the sound-reception subject is as described above and is not repeated here.
In a particular application, short-range hands-free conversation may be implemented. Specifically, when the screen end 12 and the host end 11 are each held by a user, the microphone 1181 of the host end 11 receives the voice of the user holding the host end 11 and sends it to the screen end 12, where it is played through the speaker 1272; similarly, the microphone 1271 of the screen end 12 can receive the voice of the user holding the screen end 12 and send it to the host end 11, where it is played through the speaker 1182. A telephone function can thus be implemented over short distances.
In another specific application, the screen end 12 may be used as a handheld microphone and the host end 11 as a loudspeaker box: the user's speech into the screen end 12 is amplified and emitted by the speaker 1182 on the host end 11, achieving a sound-amplification effect.
It is understood that, if cost saving is considered, a speaker may be provided on only the host end 11 or only the screen end 12.
It can be understood that the wake-up module shown in fig. 3 may also be added to the split-type mobile terminal of fig. 4, so that when looking for the phone, the user may be prompted by sound amplified through the speaker.
As mentioned earlier, the split-type mobile terminal of the present application can separate the screen end 12 and the host end 11, reducing the weight and volume of the screen end 12 and making it convenient to use. In addition, the user's voice can be freely received through the voice module 127 or 118, which further facilitates use and improves the user experience.
The split type mobile terminal provided by the embodiment of the application can comprise electronic devices such as a smart phone, a tablet computer and vehicle-mounted electronic equipment.
A voice control method will now be described based on the split-type mobile terminal described above.
Referring to fig. 5, fig. 5 is a flowchart illustrating a voice control method of a split-type mobile terminal according to an embodiment of the present application. It should be noted that the control method shown in fig. 5 is based on the split-type mobile terminal shown in fig. 2. As shown in fig. 5, the voice control method of the present embodiment includes the following steps:
Step 51: an input from a user is received.
The user input can be entered on the screen end 12 by touching the screen, or by keys or voice on the screen end 12. Similarly, the input can be performed by keys or voice on the host end 11.
Step 52: and determining the subject selected by the user to receive the sound according to the input of the user.
This step may specifically determine the sound-reception subject according to a preset rule. For example, the preset rule may be: if the user only clicks the corresponding dial or answer button when making or receiving a call, the screen end 12 is selected by default to receive sound; if the user clicks the hands-free button after clicking the corresponding dial or answer button, the host end 11 is selected by default to receive sound.
The sound-reception subject selected by the user may also be determined by providing a selection window when sound reception is needed. For example, a window can pop up when a call is made, prompting the user to select sound reception at the host end 11 or at the screen end 12.
In this step, if the user selects the screen end 12 as the sound-reception subject, the process jumps to step 53; if the user selects the host end 11, the process jumps to step 54.
Step 53: the user's voice is received by the second voice module and sent to the host end 11 for processing by the host end 11; for example, the voice may be transmitted through the radio frequency module 116 of the host end 11.
It should be noted that the second voice module is the voice module 127 in the previous embodiment.
Step 54: the user's voice is received by the first voice module. It should be noted that the first voice module is the voice module 118 in the previous embodiment.
The screen end 12 can display images to the user normally while sound is received through the voice module 118 of the host end 11.
In one embodiment, the voice modules 118 and 127 may include microphones. One of the microphone 1181 at the host end 11 and the microphone 1271 at the screen end 12 is a unidirectional microphone and the other is an omnidirectional microphone, or both are unidirectional microphones, or both are omnidirectional microphones.
In one embodiment, the microphone 1181 may be configured as an omnidirectional microphone and the microphone 1271 as a unidirectional microphone. Then, if the input received in step 51 indicates that the user is making a call while holding the screen end 12 and has not selected hands-free, the sound-reception subject selected by the user may be determined to be the microphone 1271 of the screen end 12. If the input received in step 51 indicates a call in a hands-free scenario, the sound-reception subject may be determined to be the microphone 1181 of the host end 11, which has the better sound-reception effect. If the input received in step 51 indicates a meeting-recording scenario, the sound-reception subject may likewise be determined to be the microphone 1181 of the host end 11, and the screen end 12 may perform normal display operations.
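The scenario-to-microphone mapping for this microphone configuration can be sketched as follows. The scenario strings and function name are illustrative assumptions, not identifiers from the embodiment.

```python
# Minimal sketch of the step-51/52 decision for the configuration above:
# microphone 1181 (host end 11) omnidirectional, microphone 1271
# (screen end 12) unidirectional.

def select_microphone(scenario):
    """Map a user-input scenario to the microphone that receives sound."""
    if scenario == "handheld_call":  # call held to the ear, no hands-free
        return "microphone 1271 (screen end 12)"
    if scenario in ("hands_free_call", "meeting_recording"):
        # Better pickup: the omnidirectional microphone on the host end 11;
        # the screen end 12 stays free for normal display operations.
        return "microphone 1181 (host end 11)"
    raise ValueError("unknown scenario: " + scenario)
```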
Therefore, in the above manner, the user's voice can be freely received through the voice module 127 or 118, which facilitates use and improves the user experience.
Referring to fig. 6, fig. 6 is a flowchart illustrating a voice control method for a split-type mobile terminal according to another embodiment of the present application. It should be noted that the control method shown in fig. 6 is based on the split-type mobile terminal shown in fig. 3 above. As shown in fig. 6, the voice control method of the present embodiment includes the following steps:
Step 61: the first wake-up module and the second wake-up module are started.
The first wake-up module is the wake-up module 119 described above, and the second wake-up module is the wake-up module 128 described above.
The wake-up module may include a low-power processor that can remain on at all times, even when the user has not selected any sound-reception subject; the microphone likewise remains always on. The wake-up module may further include a prompting element, such as a flash or a motor, which can alert the user by flashing the flash or vibrating the motor.
Step 62: an input from a user is received.
Step 62 is the same as step 51 described previously.
Step 63: the sound-reception subject selected by the user is determined according to the user's input.
When determining whether the subject selected by the user is the host end 11 or the screen end 12, step 63 may be performed in the same manner as step 52 described above.
In addition, in step 63, when determining whether the user selects the host end 11 and the screen end 12 together as the sound-reception subject, the determination can be made by a preset specific statement or a specific rule; for example, a specific statement such as "Siri", "Xiaona", or "Google Assistant" can indicate that the sound-reception subject selected by the user is the host end 11 together with the screen end 12.
Under the above determination criteria, if the user selects the screen end 12 as the sound-reception subject, the process goes to step 64; if the user selects the host end 11, the process goes to step 65; and if the user selects both the screen end 12 and the host end 11, the process goes to step 66.
Step 64: the speech for use is received by the second speech module and sent to thehost end 11.
Step 64 is the same as step 53 described previously.
Step 65: the speech for is received by the first speech module. It should be noted that the first voice module is thevoice module 118 in the previous embodiment.
Step 65 is the same as step 54 described previously.
Step 66: the first microphone or the second microphone receives the user's voice, and after the voice is successfully received, a wake-up prompt is presented to the user through the first wake-up module and the second wake-up module.
It should be noted that the first microphone is the microphone 1181 and the second microphone is the microphone 1271.
If the microphone 1271 or the microphone 1181 receives a specific statement indicating that the user has selected the host end 11 and the screen end 12 to receive voice together, a wake-up prompt is presented to the user through the wake-up modules.
In one particular application, the processors of the wake-up modules 119 and 128 are in an on state. If the user cannot find the split-type mobile terminal, a specific wake-up statement, such as "Siri", can be spoken aloud. If the omnidirectional microphone 1181 receives the wake-up statement, it is determined that the sound-reception subject selected by the user is the host end 11 together with the screen end 12. The voice is sent to the corresponding wake-up module 119, whose processor controls the corresponding prompting element to issue a prompt to the user, and the wake-up statement is further sent to the screen end 12 through the WiFi module 111 or the Bluetooth module 112, so that the processor and prompting element in the wake-up module of the screen end 12 also issue a prompt, helping the user find the split-type mobile terminal.
Similarly, if the microphone 1271 receives the wake-up statement, it is handled on the same principle as when the microphone 1181 receives it, and the description is not repeated here.
Referring to fig. 7, fig. 7 is a flowchart illustrating a voice control method of a split-type mobile terminal according to an embodiment of the present application. It should be noted that the control method shown in fig. 7 is based on the split-type mobile terminal shown in fig. 4. As shown in fig. 7, the voice control method of the present embodiment includes the following steps:
Step 71: an input from a user is received.
Step 71 is the same as step 51 described previously.
Step 72: and determining the subject selected by the user to receive the sound according to the input of the user.
The determination in step 72 may be made in the same manner as in step 52 described above.
In step 72, if the user selects the screen end 12 as the sound-reception subject, the process goes to step 73; if the user selects the host end 11, the process goes to step 74.
Step 73: the second microphone receives the voice of the user and sends the voice to the host end so as to amplify the voice through the first loudspeaker.
Note that the second microphone is the microphone 1271 described above, and the first speaker is the speaker 1182 described above.
Step 74: the first microphone receives the voice of the user and sends the voice to the screen end so as to amplify the voice through the second loudspeaker.
Note that the first microphone is the microphone 1181 described above, and the second speaker is the speaker 1272 described above.
In a particular application, short-range hands-free conversation may be implemented. Specifically, the screen end 12 and the host end 11 are each held by a user. If the sound-reception subject selected by the user is determined to be the host end 11, the microphone 1181 of the host end 11 receives the voice of the user holding the host end 11 and sends it to the screen end 12, where it is played through the speaker 1272. Similarly, if the sound-reception subject selected by the user is determined to be the screen end 12, the voice of the user holding the screen end 12 is received by the microphone 1271 and sent to the host end 11, where it is played through the speaker 1182. A telephone function can thus be implemented over short distances.
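The cross-end voice path above can be sketched as follows: sound captured at one end is played by the other end's speaker. The `TerminalEnd` class and its attributes are assumptions for illustration, not identifiers from the embodiment.

```python
# Illustrative sketch of the short-range intercom routing between ends.
class TerminalEnd:
    def __init__(self, name):
        self.name = name
        self.peer = None
        self.played = []  # what this end's speaker has emitted

    def capture(self, voice):
        """Microphone at this end captures voice; the peer's speaker plays it."""
        if self.peer is not None:
            self.peer.played.append(voice)

host = TerminalEnd("host end 11")      # microphone 1181 / speaker 1182
screen = TerminalEnd("screen end 12")  # microphone 1271 / speaker 1272
host.peer, screen.peer = screen, host

host.capture("hello from the host side")      # heard through speaker 1272
screen.capture("hello from the screen side")  # heard through speaker 1182
```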
In another specific application, if the sound-reception subject selected by the user is determined to be the screen end 12, the screen end 12 may be used as a handheld microphone and the host end 11 as a loudspeaker box: the user's speech into the screen end 12 is emitted through the speaker 1182 on the host end 11, amplifying the sound.
It is understood that, if cost saving is considered, a speaker may be provided on only the host end 11 or only the screen end 12.
As mentioned earlier, the split-type mobile terminal of the present application can freely receive the user's voice through the voice module 127 or 118, which facilitates use and improves the user experience.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.