CN107545892B - Equipment control method, device and system - Google Patents

Equipment control method, device and system

Info

Publication number
CN107545892B
CN107545892B (application CN201610473802.3A / CN201610473802A)
Authority
CN
China
Prior art keywords
grammar
voice
control
instruction
files
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610473802.3A
Other languages
Chinese (zh)
Other versions
CN107545892A (en)
Inventor
胡书元
赵孙平
谢志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201610473802.3A
Priority to PCT/CN2016/099469
Publication of CN107545892A
Application granted
Publication of CN107545892B
Legal status: Active (current)
Anticipated expiration


Abstract

The invention provides a device control method, apparatus, and system. The method includes the following steps: updating a grammar library based on a plurality of received grammar files, where each grammar file carries voice instructions for controlling the external device corresponding to that file; when voice information is received, identifying the target voice instruction corresponding to the voice information through the updated grammar library; and controlling the corresponding external device with the target voice instruction. The invention solves the technical problem in the related art of poor compatibility when a wearable device is used for voice control.

Description

Equipment control method, device and system
Technical Field
The invention relates to the field of intelligent devices, and in particular to a device control method, apparatus, and system.
Background
At present, wearable devices are constrained by their product form: they are generally small, so their physical size dictates limited storage and RAM resources as well as weak computing capability. Consequently, when a wearable device is used to voice-control other devices, its offline voice instructions are typically limited to three to five sentences and are set for one specific device. Too few instructions and too few device types are supported, which degrades the user experience.
When a wearable device is used for voice control, its compatibility is poor: few device types and few voice instructions are supported. The industry generally addresses this problem with a preset app or with online recognition. For a wearable device controlling other devices (such as other wearables, smart home appliances, or smart cars), the following schemes exist:
In the first scheme, most voice instructions of voice-controllable appliances are integrated into the terminal device in the form of an installed app, and the voice instructions are sent to the target device via infrared, Wi-Fi, or similar channels to achieve voice control.
In the second scheme, the device's voice instruction set is stored in the cloud; the wearable device connects to the cloud over the network and then voice-controls the local device.
The first scheme consumes substantial system resources and therefore cannot be used on wearable devices with limited resources; moreover, if the terminal device is replaced, the app must be reinstalled before use, so the compatibility problem of wearable devices remains unsolved. In the second scheme, the voice control function cannot be used when the network is disconnected.
For the technical problem in the related art of poor compatibility when a wearable device is used for voice control, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a device control method, apparatus, and system, which at least solve the technical problem in the related art of poor compatibility when a wearable device is used for voice control.
According to one aspect of the embodiments of the invention, a device control method is provided, including: updating a grammar library based on a plurality of received grammar files, where each grammar file carries voice instructions for controlling the external device corresponding to that file; when voice information is received, identifying the target voice instruction corresponding to the voice information through the updated grammar library; and controlling the corresponding external device with the target voice instruction.
Further, identifying the target voice instruction corresponding to the voice information through the updated grammar library includes: recognizing the target voice instruction from the voice information using the updated grammar library, and determining, based on the voice information, the external device that the target voice instruction requests to control.
Further, controlling the corresponding external device with the target voice instruction includes: generating a control feature code corresponding to the target voice instruction, where the control feature code corresponds to a control instruction of the external device; and sending the control feature code to the external device.
Further, determining, based on the voice information, the external device that the target voice instruction requests to control includes: identifying from the voice information an instruction indicating the control object, and determining the external device corresponding to that instruction.
Further, updating the grammar library based on the received grammar files includes: parsing the voice instructions carried in the grammar files, storing the parsed voice instructions into the grammar library, and compiling the grammar library.
Further, before the grammar library is updated based on the received grammar files, the grammar files are received as follows: a plurality of grammar files in one-to-one correspondence with a plurality of external devices are acquired from the gateway device, where the gateway device stores the grammar file obtained from each external device at the time that device establishes a communication connection with the gateway device.
Further, after the corresponding external device is controlled with the target voice instruction, the control method further includes: deleting from the grammar library the voice instructions corresponding to one or more of the received grammar files.
Further, before those voice instructions are deleted, the control method further includes: generating prompt information when the communication connection with the target external device is disconnected, where the prompt information prompts whether to delete the voice instructions corresponding to one or more received grammar files; and receiving a deletion instruction or a retention instruction, where the deletion instruction indicates that the deletion step should be executed, and the retention instruction indicates that the voice instructions of the grammar files should be kept in the grammar library.
According to another aspect of the embodiments of the invention, a control apparatus for external devices is provided, including: an updating unit configured to update a grammar library based on a plurality of received grammar files, where each grammar file carries voice instructions for controlling the external device corresponding to that file; a recognition unit configured to, when voice information is received, identify the target voice instruction corresponding to the voice information through the updated grammar library; and a control unit configured to control the corresponding external device with the target voice instruction.
Further, the updating unit includes: a parsing module configured to parse the voice instructions carried in the grammar files; and a storage module configured to store the parsed voice instructions into the grammar library and compile the grammar library.
According to another aspect of the embodiments of the invention, a device control system is provided, including a plurality of external devices, a gateway device, and a control device, where: each external device stores a grammar file carrying the voice instructions for controlling that device; the gateway device is connected to the external devices and, when the control device accesses it, sends the stored grammar files of the external devices to the control device so that the control device's grammar library is updated with them; and the control device, when voice information is received, identifies the target voice instruction corresponding to the voice information through the updated grammar library and controls the corresponding external device with it.
According to another aspect of the embodiments of the invention, a storage medium is also provided. The storage medium may be configured to store program code for performing the following steps: updating a grammar library based on a plurality of received grammar files, where each grammar file carries voice instructions for controlling the external device corresponding to that file; when voice information is received, identifying the target voice instruction corresponding to the voice information through the updated grammar library; and controlling the corresponding external device with the target voice instruction.
In the embodiments of the invention, a grammar library is updated based on a plurality of received grammar files, each carrying voice instructions for controlling the external device corresponding to that file; when voice information is received, the target voice instruction corresponding to it is identified through the updated grammar library; the corresponding external device is then controlled with the target voice instruction. Because the grammar library can be updated with the grammar file of any new external device to be controlled, the technical problem in the related art of poor compatibility when a wearable device is used for voice control is solved, and the compatibility of wearable-device voice control is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal according to an embodiment of the present invention;
fig. 2 is a flowchart of a control method of an apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of an alternative voice control according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a control device of an apparatus according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a grammar update according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of transmitting a grammar file according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of establishing a connection according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a grammar update according to an embodiment of the invention;
FIG. 9 is a diagram illustrating grammar recognition and control, in accordance with an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile terminal (e.g., a wearable device), a computer terminal, or a similar computing device. Taking a mobile terminal as an example, as shown in fig. 1, the mobile terminal may include one or more (only one shown in the figure) processors 101 (which may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 103 for storing data, and a transmission device 105 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device.
The memory 103 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the device control method in the embodiment of the present invention. The processor 101 executes various functional applications and data processing by running the software programs and modules stored in the memory 103, thereby implementing the method described above. The memory may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor; such remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a method of controlling a device, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 2 is a flowchart of a control method of an apparatus according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
Step S202: a grammar library is updated based on a plurality of received grammar files, where each grammar file carries voice instructions for controlling the external device corresponding to that file.
Step S204: when voice information is received, the target voice instruction corresponding to the voice information is identified through the updated grammar library.
Step S206: the corresponding external device is controlled with the target voice instruction.
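The three steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: `GrammarLibrary`, its substring-matching recognizer, and the `send` callback are hypothetical stand-ins for the real grammar compilation and recognition engine.

```python
# Minimal sketch of steps S202-S206; GrammarLibrary and the recognizer
# backend are hypothetical stand-ins, not part of the patent text.

class GrammarLibrary:
    """Holds voice instructions keyed by the device they control."""

    def __init__(self):
        self.instructions = {}  # device name -> set of voice instructions

    def update(self, grammar_files):
        """S202: merge each received grammar file into the library."""
        for gf in grammar_files:
            device = gf["device"]
            self.instructions.setdefault(device, set()).update(gf["instructions"])

    def recognize(self, voice_info):
        """S204: find the device and instruction matching the voice input."""
        for device, cmds in self.instructions.items():
            for cmd in cmds:
                if device in voice_info and cmd in voice_info:
                    return device, cmd
        return None


def control(library, voice_info, send):
    """S206: dispatch the recognized instruction to the matching device."""
    match = library.recognize(voice_info)
    if match is None:
        return False
    device, cmd = match
    send(device, cmd)  # e.g. over the home LAN
    return True
```

Because `update` can be called again whenever a new device's grammar file arrives, the recognizable vocabulary grows with the network rather than being fixed at build time.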
Through this embodiment, the grammar library is updated based on the received grammar files, each carrying voice instructions for controlling the device corresponding to that file; when voice information is received, the target voice instruction corresponding to it is identified through the updated grammar library; the corresponding device is then controlled with the target voice instruction. Because the grammar library can be updated with the grammar file of any new device to be controlled, the technical problem in the related art of poor compatibility when a wearable device is used for voice control is solved, and the compatibility of wearable-device voice control is improved.
Optionally, the execution subject of the above steps may be a control device that supports or can be extended with voice recognition, including but not limited to a wearable device, a mobile phone, or a tablet. The method is mainly intended for devices with limited resources (such as storage and computing resources), for example wearable devices, which are used as the example in the following description. The external device (i.e., the controlled device) is an intelligent device supporting voice control (e.g., a smart refrigerator or smart television).
In the above embodiment, when the grammar library is updated based on the received grammar files, the voice instructions carried in the grammar files can be parsed, the parsed voice instructions stored into the grammar library, and the grammar library compiled.
Specifically, before the grammar library is updated, the grammar files are received as follows: a plurality of grammar files in one-to-one correspondence with a plurality of external devices are acquired from the gateway device, where the gateway device stores the grammar file obtained from each external device at the time that device establishes a communication connection with the gateway device.
The above gateway device includes, but is not limited to, a wireless router; the following describes an embodiment of the present application in detail, taking a wireless router as an example. The specific steps of the home-network-based one-to-many voice control provided by this application are as follows:
Step S11: the device to be controlled (i.e., the controlled device) connects to the wireless router over the WIFI wireless network.
Step S12: after the device to be controlled has connected to the wireless router, the data transmission process starts; the intelligent household appliance (i.e., the device to be controlled) joins the home LAN and transmits its voice grammar file to the wireless router's dedicated storage for safekeeping.
Step S13: the wireless router classifies the received grammar files according to a predetermined format; for example, it stores the voice control instructions of the television in a list named "TV", the voice control instructions of the air conditioner in a list named "AHU", and so on.
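Step S13 amounts to grouping instructions under per-device list names, as in this sketch (the dictionary-based storage and the field names `list_name`/`instructions` are illustrative assumptions, not specified by the patent):

```python
# Sketch of step S13: the router groups received grammar files into
# per-device lists ("TV", "AHU", ...). The data layout is hypothetical.

def classify_grammar_files(grammar_files):
    """Group voice instructions by their device list name."""
    storage = {}
    for gf in grammar_files:
        storage.setdefault(gf["list_name"], []).extend(gf["instructions"])
    return storage
```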
Step S14: when the router recognizes that a wearable device with a voice recognition function has joined the local area network, it sends the wearable device a prompt asking whether voice control is needed. Upon confirmation, the wearable device automatically copies the grammar files stored in the router to local storage and compiles them so that they take effect, and after verification it reminds the user that the update is complete.
Step S15: after the update is complete, voice control can be performed. For example, for "tv set switches to next channel", the voice command is decomposed into two parts, the wake-up word ("tv") and the control command ("switch to next channel"); the wake-up word starts the voice recognition module of the corresponding device, and command control then proceeds.
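The wake-word/command decomposition in step S15 can be sketched as a prefix match against the known device vocabulary. The device list here is a hypothetical example; the patent does not specify the matching algorithm.

```python
# Sketch of step S15: split an utterance into a wake-up word (device
# name) and a control command. DEVICES is an illustrative vocabulary.

DEVICES = ("tv", "air conditioner", "refrigerator")

def split_command(utterance):
    """Return (wake_word, control_command), or None if no device matches."""
    for device in DEVICES:
        if utterance.startswith(device):
            return device, utterance[len(device):].strip()
    return None
```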
Step S16: after the corresponding device has been controlled according to the target voice instruction, the voice instructions corresponding to one or more grammar files in the grammar library may be deleted. That is, after the wearable device disconnects and leaves the network, a network-exit update process starts and the grammar files are deleted (or optionally retained), freeing space on the device for the next networking.
Specifically, when the communication connection with the external device (i.e., the device to be controlled) is disconnected, prompt information is generated asking whether to delete the voice instructions corresponding to one or more grammar files. The system then waits for the user's choice and receives the corresponding deletion or retention instruction, where the deletion instruction indicates that the voice instructions of those grammar files should be deleted from the grammar library, and the retention instruction indicates that they should be kept.
In this embodiment, the home wireless router serves as the storage device for the grammar files, and one-to-many voice control from the wearable device is achieved through three core steps: networking update, voice wake-up, and voice recognition. Within the home local area network, after a device to be controlled connects to the wireless router, it packages its voice grammar file and sends it to the router, which stores the received grammar file data of the different devices by category. After the wearable device joins the local area network, it applies for voice control authority, obtains the grammar files from the router, and, after compilation, fusion, and voice wake-up, achieves one-to-many voice control. Thus, once the wearable device enters the home local area network, it can voice-control multiple devices, improving the compatibility between voice recognition and each intelligent device and thereby the user experience.
When the control device (i.e., the wearable device) runs the method of the present application, its functions can be realized by four main modules, namely a grammar file storage module, a grammar updating module, a voice wake-up module, and a voice recognition module, as shown in fig. 3.
Grammar file storage module: with all external devices to be controlled connected to the home local area network (connected to the wireless router's wireless network through their wireless communication modules), this module is responsible for classifying and storing the grammar files received from the devices to be controlled (devices to be controlled 1 to n). Its work is completed in two cooperative steps.
Step S21: adding a device identification code. After establishing a wireless communication connection with the wireless router, the device to be controlled packs its grammar file, adds its device identification code, and transmits the package to the wireless router; for example, the identification code of the television is 001 and that of the air conditioner is 002.
Step S22: generating a grammar list. After receiving the grammar file data packet, the wireless router generates a corresponding storage list according to the device identification code, where the identification code is the list name and the grammar is the list content, for example "001 [tv] [power on, power off, channel change, ...]" and "002 [air conditioner] [power on, power off, temperature up, temperature down, ...]". The lists are then saved in the grammar file storage module.
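Steps S21 and S22 can be sketched as a pack-and-index pair; the dictionary packet format is an assumption for illustration, while codes 001/002 follow the example in the text.

```python
# Sketch of steps S21-S22: the device attaches its identification code,
# and the router builds a list keyed by that code. Packet layout is
# hypothetical.

def pack_grammar_file(code, name, instructions):
    """Device side (S21): attach the identification code before sending."""
    return {"code": code, "name": name, "instructions": list(instructions)}

def build_grammar_list(packet):
    """Router side (S22): the code is the list name, the grammar its content."""
    return {packet["code"]: (packet["name"], packet["instructions"])}
```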
Grammar updating module: this module is responsible for updating the grammar library; its work is completed in three cooperative steps.
Step S31: after the wearable device starts the voice control function, a network status check determines whether the device is connected to the wireless network. When the wearable device is normally connected to the home LAN, the status bit is passed on to the grammar update step; otherwise, the wearable device applies to connect to the wireless router's network. A status bit of '1' indicates a normal connection and '0' indicates disconnection.
Step S32: transmitting the grammar files. Once the network check is complete, a grammar file transmission can be initiated by either the wearable device or the router; after the transmission instruction is issued, the router's grammar file storage module transmits the grammar file lists to the wearable device for the grammar updating module to apply.
Step S33: this step divides into two key actions, addition and removal. After the update completes, the grammar library gains or loses the grammar instructions of a device to be controlled: if the network connection is normal, the device's grammar instructions are added; if the network is disconnected, they are deleted.
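The add/remove logic of step S33, driven by the status bit from step S31, can be sketched as follows (the in-memory dict standing in for the compiled grammar library is an assumption):

```python
# Sketch of step S33: grammar grows on connection (status bit '1') and
# shrinks on disconnection (status bit '0'). The dict is a stand-in for
# the compiled grammar library.

def apply_update(library, device_code, instructions, status_bit):
    """Add or delete one device's grammar instructions in place."""
    if status_bit == "1":      # normal connection: add the device's grammar
        library[device_code] = list(instructions)
    elif status_bit == "0":    # disconnected: delete the device's grammar
        library.pop(device_code, None)
    return library
```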
Voice wake-up module: this module performs the voice wake-up operation for the device to be controlled; its work is completed in three steps.
Step S41: device name matching. After the user speaks a voice command into the wearable device's microphone, the module matches the device name contained in the sentence (such as television, refrigerator, or air conditioner) against the grammar lists to determine which object the user requests to control.
Specifically, when the target voice instruction corresponding to the voice information is identified through the updated grammar library, the external device that the target voice instruction requests to control can be determined from among the plurality of external devices based on the voice information: an instruction indicating the control object (e.g., turning on the television or switching the channel) is identified from the voice information, and the external device corresponding to that instruction is determined.
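Determining the control object in step S41 reduces to looking up which stored device name appears in the utterance, as in this sketch (the grammar-list layout matches the hypothetical one used above and is not specified by the patent):

```python
# Sketch of step S41: match device names from the grammar lists against
# the voice information to find the requested control object.

def match_control_object(voice_info, grammar_lists):
    """Return the identification code of the device named in the
    utterance, or None if no known device name appears."""
    for code, (name, _instructions) in grammar_lists.items():
        if name in voice_info:
            return code
    return None
```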
Step S42: confirming the grammar instructions. After step S41, the grammar instruction set of the corresponding device is found and passed to the voice recognition module, so that the voice recognition module holds only the instruction set of the current device.
Step S43: waking up the device to be controlled. After the above actions are complete, the wearable device sends the device to be controlled, through the home local area network, a control feature code corresponding to the voice command; the voice recognition module on the device to be controlled is woken up, and after a successful wake-up the device feeds back the current state of its voice recognition module to the wearable device.
Voice recognition module: after the wearable device receives the confirmation from the device to be controlled and the voice recognition engine successfully recognizes the command, this module converts the command into a control feature code (the feature code corresponds to a local instruction of the device to be controlled, e.g. M02) and sends it to the device to be controlled, which performs the corresponding action according to the feature code.
It should be noted that when the corresponding external device is controlled according to the target voice instruction, a control feature code corresponding to the target voice instruction may be generated, where the control feature code corresponds to a control instruction of the external device; the control feature code is then sent to the external device to be controlled, which locally reads the corresponding control instruction according to the feature code and executes it. After control is complete, the connection with the wireless router can be dropped, a disconnection update performed (i.e., by the grammar updating module), and the corresponding voice instructions deleted, freeing storage space for the next networking.
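The feature-code exchange described above can be sketched as two small lookup tables: one on the wearable mapping (device, instruction) to a feature code, and one on the controlled device mapping the code to its local instruction. The table contents (including "M02", taken from the text's example, and "IR_CHANNEL_UP") are hypothetical.

```python
# Sketch of the feature-code exchange. The tables are illustrative;
# "M02" follows the example in the text, "IR_CHANNEL_UP" is invented.

FEATURE_CODES = {("001", "next channel"): "M02"}   # wearable side
LOCAL_INSTRUCTIONS = {"M02": "IR_CHANNEL_UP"}      # controlled-device side

def to_feature_code(device_code, instruction):
    """Wearable: map a recognized instruction to a control feature code."""
    return FEATURE_CODES.get((device_code, instruction))

def execute_feature_code(feature_code):
    """Controlled device: read the local control instruction for the code."""
    return LOCAL_INSTRUCTIONS.get(feature_code)
```

Keeping only compact feature codes on the wearable, while the full control instructions live on the controlled device, is what lets a resource-limited device drive many appliances.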
Through this embodiment, after the wearable device connects to the intelligent devices, the grammar database is transmitted as the control element through a standard interface, so the wearable device can use more grammar instructions without additional hardware resources. One-to-many (one device controlling several devices) and even many-to-many control can be realized, which greatly eases voice interconnection between wearable devices and smart home equipment.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
The embodiment of the invention also provides a device control apparatus. The apparatus is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a schematic diagram of a device control apparatus according to an embodiment of the invention. As shown in fig. 4, the apparatus may include: an updating unit 41, a recognition unit 43, and a control unit 45.
An updating unit 41, configured to update the syntax library based on the received multiple syntax files, where each syntax file carries a voice instruction for controlling an external device corresponding to the syntax file.
A recognition unit 43, configured to recognize, when voice information is received, a target voice instruction corresponding to the voice information through the updated grammar library.
A control unit 45, configured to control the corresponding external device through the target voice instruction.
Through this embodiment, the updating unit updates the grammar library based on the received multiple grammar files, each grammar file carrying a voice instruction for controlling the external device corresponding to that file; when voice information is received, the recognition unit recognizes a target voice instruction corresponding to the voice information through the updated grammar library; and the control unit controls the corresponding external device through the target voice instruction. Since the grammar library can be updated with the grammar file of a new external device whenever a new device is to be controlled, the technical problem of poor compatibility when using a wearable device for voice control in the related art is solved, and the compatibility of voice control on wearable devices is improved.
Optionally, the apparatus can be used in a control device supporting voice recognition or extensible voice recognition, including but not limited to wearable devices, mobile phones, and tablets, and is primarily intended for wearable devices. The controlled device is a smart device supporting voice control (such as a smart refrigerator or a smart television).
In the above embodiment, the update unit may include: the parsing module is used for parsing the voice instructions carried in the plurality of grammar files; and the storage module is used for storing the analyzed voice instruction into a grammar library and compiling the grammar library.
Specifically, when acquiring the grammar files, the updating unit acquires from the gateway device a plurality of grammar files corresponding one-to-one to the plurality of external devices. Each grammar file stored on the gateway device is a file that the gateway device acquired from an external device when that external device established a communication connection with the gateway device. Such gateway devices include, but are not limited to, wireless routers.
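The gateway-side bookkeeping this paragraph describes can be sketched roughly as below; the class and method names are invented for illustration.

```python
class GatewayGrammarStore:
    """Hypothetical sketch of the gateway (e.g., wireless router) behavior:
    keep one grammar file per external device, captured when that device
    establishes a connection, and hand the full set to a control device."""

    def __init__(self):
        self._store = {}  # device_id -> grammar file text

    def on_device_connected(self, device_id: str, grammar_file: str) -> None:
        # Called when an external device connects and uploads its grammar.
        self._store[device_id] = grammar_file

    def grammar_files_for_controller(self) -> dict:
        # Called when the control device (wearable) accesses the gateway.
        return dict(self._store)
```

The gateway thus acts as a passive cache: each controlled device contributes its own grammar once, and any later controller can fetch them all in one pass.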
Optionally, the recognition unit is further configured to recognize the target voice instruction from the voice information using the updated grammar library, and to determine, based on the voice information, the external device that the target voice instruction requests to control.
Optionally, the control unit is further configured to generate a control feature code corresponding to the target voice instruction, where the control feature code corresponds to a control instruction of the external device, and to send the control feature code to the external device.
Optionally, the recognition unit is further configured to recognize, from the voice information, an instruction indicating the control object, and to determine the external device corresponding to that instruction.
Optionally, the control unit is further configured to delete, from the grammar library, the voice instructions corresponding to one or more of the received grammar files.
Optionally, the control unit is further configured to generate prompt information when the communication connection with the target external device is disconnected, where the prompt information asks whether to delete the voice instructions corresponding to one or more of the received grammar files, and to receive a deletion instruction or a retention instruction, where the deletion instruction indicates that the voice instructions of the one or more grammar files should be deleted from the grammar library, and the retention instruction indicates that those voice instructions should be retained in the grammar library.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
The embodiment of the invention also provides a device control system. The system comprises: a plurality of external devices, a gateway device, and a control device such as a wearable device.
Each external device (i.e., controlled device) stores a grammar file, where each grammar file carries voice instructions for controlling that external device.
The gateway device is connected to the plurality of external devices and is configured to send, when the control device accesses it, the grammar files of the plurality of external devices stored on the gateway device to the control device, so as to update the grammar library of the control device with those grammar files.
The control device is configured to recognize, when voice information is received, a target voice instruction corresponding to the voice information through the updated grammar library, and to control the corresponding external device as indicated by the target voice instruction.
Such control devices include, but are not limited to, wearable devices.
With the above embodiment, the grammar files of the plurality of external devices are saved on the gateway device. When the control device accesses, the gateway device transmits those grammar files to the control device to update its grammar library; since the grammar library can be updated with the grammar file of a new external device whenever a new device is to be controlled, the control device can, on receiving voice information, identify the target voice instruction through the updated grammar library and control the corresponding external device as the instruction indicates. This solves the technical problem of poor compatibility when using a wearable device for voice control in the related art, and improves the compatibility of voice control on wearable devices.
In the above embodiments, the one-to-many voice control scheme between a wearable device and smart devices (i.e., devices to be controlled, or controlled devices) is suitable for all devices with built-in or extensible voice recognition. As shown in fig. 5:
Step S502, transmitting the grammar file.
As shown in fig. 6, after the connection succeeds, the wireless router M03 and the device to be controlled M02 start the grammar update: the router sends a control application request asking to control the device. The device to be controlled pops up a dialog box asking whether the user agrees to the control (a voice prompt may be used instead) and waits for confirmation. If the user agrees, the device to be controlled adds its device identification code, packs the grammar file, and transmits it to the router. If the user rejects the control application by selecting "no", a refusal flag is fed back to the wireless router; on receiving the flag, the router judges whether control was agreed to, and if not, returns to the beginning of the process; if so, it receives the grammar file. The wireless router stores the grammar files sent by different devices into a grammar storage list of a specific format.
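The "add the device identification code and pack the grammar file" step could look like the sketch below; the framing (ID, newline, body) is invented, since the patent does not specify a wire format.

```python
def pack_grammar_file(device_id: str, grammar_text: str) -> bytes:
    """Device side: prepend the device identification code to the grammar
    file body before sending it to the router (illustrative framing)."""
    return device_id.encode("utf-8") + b"\n" + grammar_text.encode("utf-8")

def unpack_grammar_file(packet: bytes) -> tuple:
    """Router side: split the packet back into (device_id, grammar_text)."""
    header, _, body = packet.partition(b"\n")
    return header.decode("utf-8"), body.decode("utf-8")
```

Tagging each grammar file with its device identification code is what lets the router keep per-device entries in its grammar storage list.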
Step S504, the voice recognition connection confirmation procedure: the wearable device M01 initiates a connection to the smart television (i.e., the device to be controlled).
Specifically, an actively initiated connection scheme may be adopted. As shown in fig. 7, the wearable device M01 raises a request asking whether a connection is required. If "yes" is selected, the smart watch initiates a connection and sends a connection request to the wireless router; if "no" is selected, it returns to the previous step or pops up the request again. The wireless router checks whether to agree to the connection; once agreed, the grammar list transmission process starts and the grammar files in the grammar storage list are sent to the wearable device. The wearable device judges whether the connection succeeded: if so, it starts the grammar updating module and receives the grammar file list; if the connection failed, it initiates the connection again.
Step S506, the grammar updating step: the grammar files are transmitted to the wearable device through the wireless router.
The grammar updating step compiles the received grammar files. As shown in fig. 8, the wearable device receives the grammar files in the grammar storage list and judges whether reception succeeded; if it failed, reception is retried; if it succeeded, the grammar library update and silent compilation steps are executed, after which the device waits for the voice wake-up module to be awakened.
Step S5062, receiving the grammar file: the wearable device starts the grammar updating step, the wireless router M03 sends the grammar files in the grammar storage list to the wearable device, and the wearable device judges whether reception of the grammar files succeeded. If it succeeded, step S5064 is entered and the grammar library is updated; if it failed, the grammar files are received again.
Step S5064, updating the grammar library: after receiving the grammar files, the wearable device parses them to obtain the voice commands, updates those commands in the grammar database, and performs silent compilation to enable the voice recognition function.
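A rough sketch of the parse/update/silent-compile sequence just described (heavily simplified: only the literal alternatives of slot-definition lines like "&lt;contact&gt;:A|B;" are extracted, and "compilation" is a stub; names are illustrative):

```python
import re

def parse_voice_commands(grammar_file: str) -> list:
    """Extract the literal command words from slot-definition lines
    of the form '<name>:alt1|alt2|...;' (a simplification of bnf)."""
    commands = []
    for line in grammar_file.splitlines():
        m = re.match(r"<\w+>:([^<>]+);$", line.strip())
        if m:
            commands.extend(p.strip() for p in m.group(1).split("|"))
    return commands

class GrammarLibrary:
    def __init__(self):
        self.commands = set()
        self.compiled = False

    def update(self, grammar_file: str) -> None:
        # Merge the parsed voice commands into the library.
        self.commands |= set(parse_voice_commands(grammar_file))
        self.compiled = False

    def silent_compile(self) -> None:
        # Stand-in for the recognition engine's silent compilation step.
        self.compiled = True
```

A real recognition engine compiles the full bnf into a recognition network; the stub here only marks the library as ready so the control flow is visible.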
The grammar file may use the bnf standard format, or another standardized format; bnf is taken as the example here, where a grammar file is built around the keywords "grammar", "slot", and "start". An instruction for selecting a television channel is as follows:
#BNF+IAT 1.0 UTF-8;
!grammar switch_channel;
!slot <contact>;
!slot <appname>;
!slot <song>;
!start <actions>;
<actions>:<switch>;
<switch>:(open|select)<contact>;
<contact>:CCTV|Hunan TV|Shaanxi TV|CCTV-1;
In the above instructions, the purpose of the instruction is determined to be switching channels (i.e., "switch_channel"), the action is determined to be open or select (i.e., "<actions>:<switch>"), and the target of the open or select action (i.e., "<contact>") may be "CCTV", "Hunan TV", "Shaanxi TV", "CCTV-1", and so on. It should be noted that in different systems or devices the specific instruction format may be determined according to the situation; the present application does not limit this.
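As an illustration of how a recognizer could apply the compiled grammar above, the toy matcher below enumerates the action/contact combinations; a real engine compiles the bnf into a recognition network, so this is only a sketch with invented names.

```python
# Toy matcher for the switch_channel grammar above (illustrative only).
ACTIONS = ("open", "select")
CONTACTS = ("CCTV", "Hunan TV", "Shaanxi TV", "CCTV-1")

def match_switch_channel(utterance: str):
    """Return the parsed slots if the utterance fits the grammar, else None."""
    for action in ACTIONS:
        for contact in CONTACTS:
            if utterance == f"{action} {contact}":
                return {"grammar": "switch_channel",
                        "action": action,
                        "contact": contact}
    return None
```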
Step S508, voice recognition control: after the wearable device has merged the grammar files, voice control is performed. This step mainly consists of voice wake-up and voice recognition. As shown in fig. 9:
When the user speaks a single instruction, such as "TV, switch to Hunan Satellite TV", the wearable device exits standby and the instruction is recognized on the wearable device M01. First the device identification code is judged, and a wake-up instruction for voice wake-up is generated and sent to the device to be controlled M02. The device to be controlled exits standby and judges whether remote control is enabled: if not, it remains in standby; if so, it starts voice recognition. The device to be controlled then responds with feedback according to the wake-up result: if wake-up succeeded, the wearable device gives feedback and starts voice recognition; if wake-up failed, it returns to standby.
The voice instruction is then recognized and judged for legality: if the instruction appears in "001 [television] [grammar instruction set]", it is judged legal and a control instruction is sent to the device to be controlled; otherwise the user is prompted to input the voice instruction again (e.g., a voice prompt asking the user to speak again). The device to be controlled identifies the action from the control instruction and returns to standby after completing it.
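The legality check might be sketched as follows; the "001 [television] [...]" record layout echoes the example above, but the data structure and function names are hypothetical.

```python
# Hypothetical instruction-set table: device code -> device name and the
# grammar instruction set merged for that device.
INSTRUCTION_SETS = {
    "001": {"device": "television",
            "commands": {"open Hunan TV", "open CCTV-1", "select Shaanxi TV"}},
}

def is_legal(device_code: str, command: str) -> bool:
    """An instruction is legal only if its device code is known and the
    command appears in that device's grammar instruction set."""
    entry = INSTRUCTION_SETS.get(device_code)
    return entry is not None and command in entry["commands"]
```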
In the above embodiment, the merged grammar files may be removed when the network is disconnected. This step is the reverse of the update in step S506: the process of adding a grammar becomes a process of deleting grammar instructions. There are two ways to trigger this grammar update. One is network link disconnection, i.e., the update is triggered after the link drops (for example, the distance exceeds the supported communication range, or the signal is too weak). The other is an active click to disconnect: on either the control device or the device to be controlled, if the user clicks to cancel control, the control flag is reset and the grammar update is triggered at the same time. After the grammar update, the user should be prompted that "voice control has been disconnected".
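The reverse (disconnect-triggered) update might be sketched like this; the class and method names are invented for illustration.

```python
class GrammarUpdater:
    """Sketch of the disconnect-triggered grammar update: commands added
    for a device are removed again when the link to it is lost or the
    user cancels control, freeing storage for the next session."""

    def __init__(self):
        self.library = {}  # device_id -> set of voice commands

    def add_device(self, device_id: str, commands) -> None:
        self.library[device_id] = set(commands)

    def on_disconnect(self, device_id: str) -> str:
        # Delete the device's commands and return the user prompt.
        self.library.pop(device_id, None)
        return "voice control has been disconnected"
```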
Example 4
The embodiment of the invention also provides a storage medium. Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
S1, updating a grammar library based on the received multiple grammar files, where each grammar file carries a voice instruction for controlling the external device corresponding to the grammar file;
S2, when voice information is received, identifying a target voice instruction corresponding to the voice information through the updated grammar library;
S3, controlling the corresponding external device through the target voice instruction.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
S4, generating a control feature code corresponding to the target voice instruction, where the control feature code corresponds to the control instruction of the external device;
S5, sending the control feature code to the external device.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium: updating a grammar library based on a plurality of received grammar files, wherein each grammar file carries a voice instruction for controlling external equipment corresponding to the grammar file; when voice information is received, a target voice instruction corresponding to the voice information is identified through the updated grammar library; and controlling the corresponding external equipment through the target voice command.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium: generating a control feature code corresponding to the target voice instruction, wherein the control feature code corresponds to a control instruction of the external device; and sending the control feature code to the external equipment.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

the gateway device is configured to send, to the control device, the plurality of grammar files of the plurality of external devices stored on the gateway device when the control device accesses, in the following manner: sending prompt information to the control device, where the prompt information asks whether to control the external devices by voice, and sending the plurality of grammar files of the plurality of external devices stored on the gateway device to the control device when control is confirmed, where each grammar file stored on the gateway device is a file acquired by the gateway device from an external device among the plurality of external devices when that external device established a communication connection with the gateway device.
CN201610473802.3A2016-06-242016-06-24Equipment control method, device and systemActiveCN107545892B (en)

Priority Applications (2)

Application NumberPriority DateFiling DateTitle
CN201610473802.3ACN107545892B (en)2016-06-242016-06-24Equipment control method, device and system
PCT/CN2016/099469WO2017219519A1 (en)2016-06-242016-09-20Method, apparatus, and system for controlling device

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN201610473802.3ACN107545892B (en)2016-06-242016-06-24Equipment control method, device and system

Publications (2)

Publication NumberPublication Date
CN107545892A CN107545892A (en)2018-01-05
CN107545892Btrue CN107545892B (en)2021-07-30

Family

ID=60783270

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN201610473802.3AActiveCN107545892B (en)2016-06-242016-06-24Equipment control method, device and system

Country Status (2)

CountryLink
CN (1)CN107545892B (en)
WO (1)WO2017219519A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN110164198A (en)*2018-01-252019-08-23安徽华晶微电子材料科技有限公司A kind of intelligence wearable device
CN109945406A (en)*2019-03-132019-06-28青岛海尔空调器有限总公司 Air conditioner
US11393463B2 (en)*2019-04-192022-07-19Soundhound, Inc.System and method for controlling an application using natural language communication
WO2022061293A1 (en)2020-09-212022-03-24VIDAA USA, Inc.Display apparatus and signal transmission method for display apparatus
CN112153440B (en)*2020-10-102023-04-25Vidaa美国公司Display equipment and display system
CN113223535B (en)*2021-03-222024-04-05惠州市德赛西威汽车电子股份有限公司Vehicle-mounted voice skill real-time recommendation and downloading system and method
CN113921004A (en)*2021-09-262022-01-11北京金山云网络技术有限公司Intelligent device control method and device, storage medium and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN1522432A (en)*2001-07-032004-08-18Method and apparatus for improving voice recognition performance in a voice application distribution system
US7987490B2 (en)*2006-12-292011-07-26Prodea Systems, Inc.System and method to acquire, aggregate, manage, and distribute media
CN102760433A (en)*2012-07-062012-10-31广东美的制冷设备有限公司Sound control remote controller and control method of networked household appliances
CN103959374A (en)*2011-11-172014-07-30环球电子有限公司System and method for voice actuated configuration of a controlling device
CN105611033A (en)*2014-11-252016-05-25中兴通讯股份有限公司Method and device for voice control

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN101989285A (en)*2009-08-072011-03-23赛微科技股份有限公司 Data query and provision method, query system, portable device and server thereof
CN102546267B (en)*2012-03-262015-06-10杭州华三通信技术有限公司Automatic configuration method of network device and management server
CN102708858A (en)*2012-06-272012-10-03厦门思德电子科技有限公司Voice bank realization voice recognition system and method based on organizing way
CN103955179A (en)*2014-04-082014-07-30小米科技有限责任公司Remote intelligent control method and device
CN104183237B (en)*2014-09-042017-10-31百度在线网络技术(北京)有限公司Method of speech processing and device for portable terminal
CN104768204A (en)*2015-03-252015-07-08广东欧珀移动通信有限公司 A network access management method, wearable device and system
CN105094807A (en)*2015-06-252015-11-25三星电子(中国)研发中心Method and device for implementing voice control

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN1522432A (en)*2001-07-032004-08-18Method and apparatus for improving voice recognition performance in a voice application distribution system
US7987490B2 (en)*2006-12-292011-07-26Prodea Systems, Inc.System and method to acquire, aggregate, manage, and distribute media
CN103959374A (en)*2011-11-172014-07-30环球电子有限公司System and method for voice actuated configuration of a controlling device
CN102760433A (en)*2012-07-062012-10-31广东美的制冷设备有限公司Sound control remote controller and control method of networked household appliances
CN105611033A (en)*2014-11-252016-05-25中兴通讯股份有限公司Method and device for voice control

Also Published As

Publication numberPublication date
WO2017219519A1 (en)2017-12-28
CN107545892A (en)2018-01-05

Similar Documents

PublicationPublication DateTitle
CN107545892B (en)Equipment control method, device and system
CN108781473B (en)Method and equipment for sharing files among different terminals
CN108964994B (en)Replacement method of intelligent household equipment
CN110602692B (en)Data updating method and device and electronic equipment
CN109360564B (en)Method and device for selecting language identification mode and household appliance
CN107493212B (en)Configuration processing method, terminal, server and system of intelligent household equipment
CN111083765A (en)Method for managing intelligent equipment, intelligent router and intelligent home system
CN104635501A (en)Intelligent home control method and system
CN107454657B (en)Voice network distribution method
CN112612497A (en)Firmware upgrading method based on gateway and firmware upgrading method of equipment
CN105515853A (en)Wireless network node and state update method thereof
US20160132029A1 (en)Method for configuring and controlling smart home products
CN103580921A (en)Automatic network equipment upgrading method and automatic network equipment upgrading system
KR20150067090A (en)Multi-screen interaction method, devices, and system
CN113516980B (en)Scene linkage method, construction method and device of scene linkage system
CN112581959A (en)Intelligent device control method and system and voice server
CN110850736A (en)Control method and system
CN106648721A (en)Method and device for upgrading software
CN110876117A (en)Method and device for recovering lost connection of terminal
CN113132958A (en)Fusion networking method, device, system and computer readable storage medium
EP3015990B1 (en)Information processing device, and destination information updating method and program
CN111105789A (en)Awakening word obtaining method and device
CN106781378A (en)Information matching method, information configuration method of remote controller and corresponding devices
US11284476B1 (en)Systems and methods for commissioning nodes of a wireless network
CN113096668B (en) A method and device for building a collaborative voice interaction engine cluster

Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
GR01Patent grant
GR01Patent grant
