Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
The following is an introduction and explanation of several terms involved in the present application:
NLP, short for Natural Language Processing, aims to enable machines to understand unstructured types of data, such as text.
TTS, short for Text To Speech, is used as part of a human-machine conversation to intelligently convert text into a speech stream so that the machine can speak.
Voiceprint recognition (Voiceprint Recognition) automatically recognizes a speaker's identity by extracting the speaker's voice features, i.e., the voiceprint. That is, the voiceprint is a contactless biometric feature with identity uniqueness: the identity of a speaker can be uniquely determined from a single spoken sentence, just as with biometric features such as facial features, iris, fingerprint, finger vein, palm print, and the like.
As described above, in the process of controlling the smart device by voice, the user needs to install the client associated with the smart device in advance, and also needs to configure the voice command in advance in the running client.
For example, with the client associated with the intelligent device running, suppose a user configures the voice command "increase the temperature of the living room air conditioner by 3 degrees" in advance in the running client, and the voice command is stored in the command library. When the user feels that the living room is somewhat cold, the user can issue the voice command "increase the temperature of the living room air conditioner by 3 degrees". After the voice command is matched against the command library in the background, the background can determine from the voice command that the target device is the living room air conditioner and that the setting operation is to raise the temperature by 3 degrees, so as to control the living room air conditioner to increase the temperature by 3 degrees.
In the above process, on one hand, as the number of intelligent devices increases, the number of voice commands to be preconfigured increases, which makes the user operation more and more complicated.
On the other hand, a user who is unfamiliar with the smart device or its associated client, such as a guest visiting a host's home, may have no idea how the voice instruction should be configured. Such a user cannot enjoy the convenience of the smart home at all, and the user experience suffers.
It can be seen from the above that how to simplify the control of the smart device is a problem to be solved.
Therefore, the intelligent device control method provided by the present application can effectively simplify the control of the intelligent device. It is correspondingly applicable to an intelligent device control apparatus that can be deployed in an electronic device; for example, the electronic device may be a user terminal, an intelligent device provided with a voice acquisition module, a gateway, a server, and the like.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment related to a smart device control method. The implementation environment includes user terminal 110, intelligent device 130, gateway 150, server side 170, and router 190.
Specifically, the user terminal 110, which may also be referred to as a user equipment or a terminal, is deployed with the client associated with the smart device 130. The user terminal 110 may be an electronic device such as a smart phone, a tablet computer, a notebook computer, or a desktop computer, which is not limited herein.
The client is associated with the smart device 130, so that when the client is running in the user terminal 110, relevant functions such as smart device control can be provided for the user, where the client may be in the form of an application program, or may be in the form of a web page, and accordingly, a user interface displayed by the client may be in the form of a program window, or may be in the form of a web page, which is not limited herein.
The intelligent device 130 is deployed under the gateway 150, communicates with the gateway 150 through its own configured communication module, and is thereby controlled by the gateway 150. In one application scenario, the intelligent device 130 accesses the gateway 150 via a local area network and is thereby deployed under the gateway 150. The process of the intelligent device 130 accessing the gateway 150 through the local area network includes first establishing a local area network by the gateway 150, and the intelligent device 130 joining the local area network established by the gateway 150 by connecting to the gateway 150. Such local area networks include, but are not limited to, ZigBee or Bluetooth. The intelligent device 130 may be an intelligent printer, an intelligent fax machine, an intelligent camera, an intelligent air conditioner, an intelligent door lock, an intelligent lamp, or an electronic device configured with a communication module such as a human body sensor, a door and window sensor, a temperature and humidity sensor, a water immersion sensor, a natural gas alarm, a smoke alarm, a wall switch, a wall socket, a wireless switch, a wireless wall-mounted switch, a magic cube controller, a curtain motor, an intelligent sound box, etc.
Interaction between the user terminal 110 and the intelligent device 130 may be accomplished through a local area network, or through a wide area network. In one application scenario, the user terminal 110 establishes a communication connection with the gateway 150 through the router 190 in a wired or wireless manner, for example, including but not limited to WIFI, so that the user terminal 110 and the gateway 150 are deployed in the same local area network, and the user terminal 110 may then interact with the smart device 130 through a local area network path. In another application scenario, the user terminal 110 establishes a wired or wireless communication connection with the gateway 150 through the server 170, for example, but not limited to, 2G, 3G, 4G, 5G, WIFI, etc., so that the user terminal 110 and the gateway 150 are deployed in the same wide area network, and the user terminal 110 may then interact with the smart device 130 through a wide area network path.
The server 170 may be considered as a cloud, a cloud platform, a server, etc., where the server 170 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center formed by a plurality of servers, so as to better provide background services to a large number of user terminals 110. For example, the background services include smart device control services.
In one application scenario, the intelligent device 130 is configured with a voice collection module to collect the words of the target object to form a corpus, and identify the target object according to the corpus, so as to determine the object type of the target object, and further notify the gateway 150 where the intelligent device 130 is deployed.
After receiving the corpus, the gateway 150 can obtain device control data matched with the object type based on the corpus, so as to control the target device to execute the setting operation according to the device control data.
Further, after the target device performs the setting operation, as shown in fig. 7, the gateway 150 may further perform steps 410 to 430. Specifically, the gateway 150 receives an execution result returned by the target device when performing the setting operation, generates reply data corresponding to the corpus according to the execution result, and sends the reply data to the intelligent device 130. The intelligent device 130 then converts the reply data corresponding to the corpus into speech through the TTS technology provided by its configured speech output module, and feeds the speech back to the target object.
In this way, a "corpus - reply data" voice interaction mode in the smart home scenario is formed, so that control of the intelligent device can be realized simply and efficiently, and the user experience is improved.
Referring to fig. 2, an embodiment of the present application provides a method for controlling an intelligent device, which is applicable to an electronic device, for example, the electronic device may be the user terminal 110, the intelligent device 130 configured with a voice acquisition module, the gateway 150, the server, etc. in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step of the method is described as an electronic device, but this configuration is not particularly limited.
As shown in fig. 2, the method may include the steps of:
in step 310, a corpus of target objects is obtained.
The corpus is used for representing the service requested by the target object.
First, the target object refers to the object forming the corpus, specifically the object requesting a service through the corpus. For example, the object may be at least one of a person (e.g., a host or a guest) or a robot that may be present in the smart home scenario. In one embodiment, the services include an information query service and an environment control service, where the information query service refers to a request to query information related to the environment where the target object is located, and the environment control service refers to a request to control the intelligent device to perform a setting operation so as to improve how the target object feels about the environment where it is located. Accordingly, the service types of the services may be considered to include the information query service and the environment control service.
Second, the corpus is distinguished from a voice instruction, and may be considered a sentence that the target object speaks about its environment. For example, the corpus is a sentence expressing that the target object expects to improve its perception of the environment, or a sentence expressing the environment-related information the target object expects to query, rather than a sentence instructing a particular smart device to perform a particular operation.
That is, the corpus reflects the actual requirement of the target object, and it can be understood that the corpus is an accurate description of the service requested by the target object. For example, if the target object desires to query the electricity consumption of the month, the corpus of the target object can be "how much electricity was used this month", indicating that the target object requests the information query service; if the target object feels that the living room is not bright enough, the corpus can be "could the living room be a bit brighter", indicating that the target object requests the environment control service.
The corpus is acquired by utilizing a voice acquisition module configured by the electronic equipment to acquire the speaking words of the target object. For example, the electronic device may be the user terminal 110 in the implementation environment shown in fig. 1, and the microphone configured by the user terminal 110 may be regarded as a voice acquisition module.
In step 330, the target object is identified according to the corpus, and the object type of the target object is determined.
Wherein the object type is used to identify the identity of objects of the same class. In one embodiment, divided by affinity, the object types include a host type and a guest type. For example, the host type target object may be a family member (may also be considered the host), and the guest type target object may be a non-family member (may also be considered a guest); or the host type target object may be the user terminal holder, i.e., the user itself, and the guest type target object may be any object other than the user; or the host type target object may be an object granted device control authority, and the guest type target object may be an object not granted device control authority. In this way, compared with a guest type target object, the daily habits of the more intimate host type target object are grasped more fully, improving the host's comfort in the environment control service, and privacy protection is stricter, guaranteeing the host's privacy in the information query service.
Of course, in other embodiments, the object types of different objects may be further divided according to the actual needs of the application scenario; for example, by age, the object types include an elderly type, a teenager type, and a middle-aged type, or by gender, the object types include a female type and a male type, which are not particularly limited herein.
In this embodiment, the identification, that is, the determination of the object type, is implemented by voiceprint recognition.
A voiceprint refers to the sound features used to indicate the identity of a speaker, and may also be considered to uniquely identify the speaker. It will be appreciated that voice characteristics differ from speaker to speaker, and so do voiceprints, so that the identity of a speaker can be uniquely determined from the voiceprint. Based on this, the voiceprint of the target object uniquely identifies the identity of the target object, i.e., uniquely identifies the object type of the target object.
FIG. 3 illustrates a flow diagram for voiceprint recognition. As shown in FIG. 3, in one embodiment, voiceprint recognition includes feature extraction 301 and pattern matching 302. In another embodiment, voiceprint recognition further includes preprocessing 303 prior to feature extraction 301, where the preprocessing includes endpoint detection, including silence detection (VAD, Voice Activity Detection), and noise cancellation, including voice quality detection and effective audio extraction.
It is explained here that before the object type of the target object can be determined by voiceprint recognition of the corpus, the target object is first required to perform voiceprint registration, as shown in fig. 3. Voiceprint registration means that the target object records corresponding corpus through the voice collection module to form the voiceprint of the target object, so that an association relationship between the object type and the voiceprint of the target object is established in a voiceprint library, providing a basis for the pattern matching in voiceprint recognition. For example, in the voiceprint library, an association relationship between the host object type and the voiceprints of a plurality of hosts is established; if the voiceprint of the target object cannot be matched with any voiceprint in the voiceprint library, the object type of the target object can be identified as the guest type, that is, the target object is a guest.
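The registration-and-matching flow described above can be sketched in Python as follows. This is only an illustrative sketch: the voiceprint library contents, the embedding values, the cosine-similarity measure, and the matching threshold are all assumptions for illustration, not part of the present application.

```python
import math

# Hypothetical voiceprint library: object type -> enrolled voiceprint
# embeddings recorded during voiceprint registration (values are made up).
VOICEPRINT_LIBRARY = {
    "host": [[0.9, 0.1, 0.3], [0.2, 0.8, 0.4]],
}

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_object_type(voiceprint, threshold=0.95):
    # Pattern matching: if the extracted voiceprint matches no enrolled
    # voiceprint in the library, fall back to the guest type.
    for object_type, enrolled in VOICEPRINT_LIBRARY.items():
        if any(cosine_similarity(voiceprint, v) >= threshold for v in enrolled):
            return object_type
    return "guest"
```

A registered host's voiceprint is matched to the host type, while an unknown voiceprint falls through to the guest type, mirroring the library-lookup behavior described above.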
Step 350, acquiring device control data matched with the object type based on the corpus.
The device control data is used to instruct the target device to execute the setting operation. Specifically, the device control data is related to the service requested by the target object: if the requested service is the environment control service, the device control data instructs the target device to execute a setting operation capable of improving how the target object feels about the environment where it is located; if the requested service is the information query service, the device control data instructs the target device to execute a device state query operation (or a query prohibition operation) for the information the target object desires to query.
Referring to fig. 4, in one exemplary embodiment, step 350 may include the steps of:
In step 351, keyword extraction is performed on the corpus to obtain the keywords in the corpus.
In one embodiment, keyword extraction algorithms include at least supervised, semi-supervised, and unsupervised keyword extraction. Unsupervised keyword extraction further includes keyword extraction based on statistical characteristics, keyword extraction based on a word graph model, keyword extraction based on a topic model, and the like.
For example, in natural language processing, keyword extraction includes text preprocessing, text representation, and text analysis. The text preprocessing decomposes the corpus into words, the text representation converts the words in the corpus into feature vectors, and the text analysis uses a machine learning model to predict from the feature vectors, thereby obtaining the recognition result, namely the keywords in the corpus.
In step 353, the target device and the setting operation related to the keyword are obtained according to the service type and/or the object type of the service.
As described above, the service types include the environment control service and the information query service, and the object types, divided by privacy protection class, at least include the host type and the guest type. Correspondingly, a set service execution mode is determined according to the service type and/or the object type, and the set service execution mode is entered so as to obtain the target device and the setting operation related to the keywords. In one embodiment, the service execution modes include a query mode and a control mode; in another embodiment, the service execution modes include a host query mode, a guest query mode, a host control mode, and a guest control mode.
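The determination of a set service execution mode from the service type and object type can be sketched as a simple lookup; the mode names and type labels below are illustrative assumptions, not terminology fixed by the present application.

```python
# Illustrative mapping from (service type, object type) to the set
# service execution mode; the mode names are assumptions.
EXECUTION_MODES = {
    ("information_query", "host"): "host_query_mode",
    ("information_query", "guest"): "guest_query_mode",
    ("environment_control", "host"): "host_control_mode",
    ("environment_control", "guest"): "guest_control_mode",
}

def select_execution_mode(service_type, object_type):
    # Determine which set service execution mode to enter.
    return EXECUTION_MODES[(service_type, object_type)]
```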
In one embodiment, the service execution mode determines the keyword-related target device and setting operation based on historical behaviors occurring in the smart home scenario; in another embodiment, based on the priorities of the intelligent devices; and in yet another embodiment, based on the information to be queried.
For example, if the corpus of the target object is "can the living room be a bit hotter", the keywords in the corpus at least include "living room" and "hot"; after entering the set service execution mode, the target device can be determined to be the living room air conditioner, and the setting operation is to raise the temperature.
If the corpus of the target object is "when did I get home last night", the keywords in the corpus at least include "last night" and "get home"; after entering the set service execution mode, the target device can be determined to be the switch connected to the lamp, and the setting operation is an operation of querying the time of getting home last night.
Step 355, obtaining device control data according to the target device and the setting operation.
Still referring to the foregoing examples: if the target device is the living room air conditioner and the setting operation is to raise the temperature, the device control data is used to instruct the living room air conditioner to raise the temperature, that is, the target object expects the air conditioner to raise the temperature. If the target device is the switch connected to the lamp and the setting operation is to query the time of getting home last night, the device control data is used to instruct querying the click state of the switch, that is, the time at which the switch was last clicked last night.
Therefore, for different target objects, the setting operation executed by the intelligent device also differs, which can better satisfy the services requested by different target objects. For example, when the service requested by the target object is the information query service: if the object type of the target object is the host type, the intelligent device performs an operation of querying the information to be queried; if the object type is the guest type, the intelligent device performs an operation of prohibiting the query, thereby protecting the privacy of the host.
In step 370, the target device is controlled to perform a setting operation according to the device control data.
After the device control data is obtained, the control of the target device can be performed based on the device control data, so as to satisfy the service requested by the target object.
For example, if the target object feels cold in the living room and the obtained device control data instructs the living room air conditioner to raise the temperature by 3 degrees, the living room air conditioner will perform the setting operation of raising the temperature by 3 degrees according to the device control data, so that the target object feels warm in the living room. If the target object desires to know when it got home last night and the obtained device control data instructs the intelligent door lock to perform the corresponding query operation, the intelligent door lock will query the last time it was in the door-open state last night according to the device control data, so that the target object learns when it got home last night.
Through the above process, the target object only needs to express its actual demand through the corpus, without installing the client associated with the intelligent device in advance or preconfiguring corresponding voice instructions in the running client. The problem in the related art that controlling the intelligent device is complicated can thus be effectively solved, and the user experience is improved.
The different set service execution modes for the various service types/object types will now be described in detail with reference to fig. 5 and 6 as follows:
As shown in fig. 5, in an exemplary embodiment, if the service type is an environment control service, the step 353 may include the steps of:
Step 410, mapping the device attribute of the keyword to obtain the target device attribute associated with the keyword.
The device attribute is used for indicating an attribute of an object monitored by the intelligent device, and specifically may be an attribute indicating an environment monitored by the intelligent device. For example, the device attribute may be temperature, brightness, humidity, volume, etc.
In this embodiment, the mapping of the device attribute is implemented through the association relationship between the keyword and the device attribute.
Continuing the foregoing example, if the corpus of the target object is "can the living room be a bit hotter", the keywords in the corpus include at least "living room" and "hot".
Assuming that the device attributes include temperature, brightness, volume, etc., the keywords associated with temperature include weather, air, room, here, cold, and hot; the keywords associated with brightness include room, here, bright, and dark; and the keywords associated with volume include room, here, and noisy.
Then, by means of the device attribute mapping, according to the keyword "hot" in the corpus "can the living room be a bit hotter", it can be determined that the device attribute having an association relationship with the keyword is temperature, and the temperature is used as the target device attribute.
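The device attribute mapping can be sketched as follows; the dictionary contents mirror the temperature/brightness/volume example above but are otherwise illustrative.

```python
# Association relationship between device attributes and keywords,
# mirroring the temperature/brightness/volume example above.
ATTRIBUTE_KEYWORDS = {
    "temperature": {"weather", "air", "room", "here", "cold", "hot"},
    "brightness": {"room", "here", "bright", "dark"},
    "volume": {"room", "here", "noisy"},
}

def map_target_attributes(keywords):
    # Device attribute mapping: return every attribute associated with
    # at least one of the extracted keywords.
    return {attr for attr, kws in ATTRIBUTE_KEYWORDS.items()
            if any(kw in kws for kw in keywords)}
```

Given the keywords "living room" and "hot", only "hot" has an association relationship, so the target device attribute is temperature.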
Step 430, searching for the target device having the device attribute matching the target device attribute, and obtaining the setting operation corresponding to the target device.
Specifically, the determination process of the target device and the setting operation may include the steps of:
If the object type is the host type, go to steps 431 to 433.
Step 431, obtain historical behavior data.
The historical behavior data is used for indicating the candidate object to control the first candidate device to execute the first operation.
First, the object type of the candidate object is the same as the object type of the target object. It will be appreciated that a person appearing in the smart home scenario may be a host or a guest, and that there may be more than one host; for example, the hosts may be dad, mom, grandpa, and grandma, where the object types of dad, mom, grandpa, and grandma are all the host type. Based on this, if the target object is dad, the candidate object may be dad, mom, grandpa, or grandma, i.e., an object whose object type is the same as that of the target object. Of course, in other application scenarios, the host may also refer to a specific person, such as the holder of the user terminal, which is not limited herein.
Second, the device attributes of the first candidate device match the target device attribute. Taking brightness as an example of the device attribute, the intelligent device having this attribute may be an intelligent lamp, a switch, a curtain motor, and the like; taking volume as an example, the intelligent device having this attribute may be an intelligent sound box, an intelligent television, and the like.
In this way, objects of the same object type as the target object serve as candidate objects, and intelligent devices whose device attributes match the target device attribute serve as first candidate devices. This greatly enriches the historical behavior data, captures the historical behaviors of target objects of different object types, and allows the intelligent device to be controlled better according to the daily habits of target objects of different object types.
Step 432, selecting a first candidate device meeting the first setting condition as the target device according to the historical behavior data.
In one possible embodiment, the first setting condition refers to the first candidate device that is controlled to perform the first operation the largest number of times. In one possible embodiment, the first setting condition refers to the first candidate device being controlled to perform the first operation more than a set threshold. In one possible embodiment, the first setting condition refers to the first candidate device controlled to perform the first operation the largest number of times within the set time. In one possible embodiment, the first setting condition refers to the first candidate device being controlled to perform the first operation more than a set threshold number of times within a set time. The setting threshold and the setting time can be flexibly set according to the actual needs of the application scene, which is not limited herein.
Step 433, the first operation performed by the target device is taken as a setting operation.
For example, assuming the corpus "living room is somewhat cold" for the target object, historical behavior data a, b, c is obtained. The historical behavior data a is used for indicating that the target object controls the living room air conditioner to raise the temperature by 3 degrees, the historical behavior data b is used for indicating that the target object controls the living room air conditioner to raise the temperature by 3 degrees, and the historical behavior data c is used for indicating that the target object controls the living room electric heater to adjust from 1 gear to 2 gears.
Based on the above, according to the historical behavior data, the living room air conditioner was controlled to raise the temperature by 3 degrees more times than the living room electric heater was adjusted from gear 1 to gear 2. Therefore, the target device can be determined to be the living room air conditioner, and the setting operation is to raise the temperature by 3 degrees.
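The selection under the first setting condition (the first candidate device controlled to perform the first operation the most times) can be sketched as follows; the record format and field names are assumptions for illustration.

```python
from collections import Counter

def select_from_history(history, target_attribute):
    # First setting condition: among devices whose attribute matches the
    # target device attribute, pick the (device, operation) pair that was
    # controlled the most times.
    counts = Counter(
        (record["device"], record["operation"])
        for record in history
        if record["attribute"] == target_attribute
    )
    (device, operation), _ = counts.most_common(1)[0]
    return device, operation

# Historical behavior data a, b, c from the example above.
history = [
    {"device": "living room air conditioner",
     "operation": "raise temperature by 3 degrees", "attribute": "temperature"},
    {"device": "living room air conditioner",
     "operation": "raise temperature by 3 degrees", "attribute": "temperature"},
    {"device": "living room electric heater",
     "operation": "adjust from gear 1 to gear 2", "attribute": "temperature"},
]
```

Running the selection over this history picks the living room air conditioner with the operation of raising the temperature by 3 degrees, as in the example.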
Of course, in other embodiments, if there is also historical behavior data for the guest type target object, steps 431 through 433 may still be performed to obtain device control data, which is not particularly limited herein.
However, in some embodiments, it is quite likely that no historical behavior data exists for a guest-type target object. In that case, if the object type is the guest type, steps 434 to 436 are performed.
At step 434, at least one second candidate device having device attributes that match the target device attributes is selected.
In step 435, a second candidate device meeting the second setting condition is selected from the at least one second candidate device as the target device.
In one possible embodiment, the second setting condition refers to the second candidate device with the highest priority. In one possible embodiment, the second set condition refers to the second candidate device having a priority level exceeding the set priority level. The setting priority may be flexibly set according to the actual requirement of the application scenario, which is not limited herein.
In step 436, operation mapping is performed on the target device to obtain a second operation associated with the target device as the setting operation.
In this embodiment, the selection of the target device and the mapping of the setting operation are implemented by establishing an association relationship between the intelligent device (including its device attribute, priority, etc.) and the second operation. The association relationship may be stored as a queue, an array, a database table, or the like, which is not limited herein.
Taking an association relationship established as a database table as an example, and taking intelligent devices whose device attribute is temperature for illustration, as shown in Table 1 below:
Table 1 association relationship between smart device and second operation
Based on the above Table 1, if the corpus of the target object is "the living room is cold", the target device attribute can be determined to be temperature based on the keyword "cold". According to Table 1, the second candidate devices include the living room air conditioner companion, the living room air conditioner, and the living room electric heater. Based on the priorities of the second candidate devices, the second candidate device with the highest priority (1), namely the living room air conditioner, is determined as the target device, and the second operation associated with the target device, namely raising the temperature by 3 degrees, is determined as the setting operation.
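The selection under the second setting condition (highest priority) can be sketched as follows; the table rows are an illustrative reconstruction consistent with the example above, not the actual contents of Table 1.

```python
# Illustrative reconstruction of a Table 1-style association relationship;
# the priority values and operations are assumptions consistent with the text.
DEVICE_TABLE = [
    {"device": "living room air conditioner", "attribute": "temperature",
     "priority": 1, "operation": "raise temperature by 3 degrees"},
    {"device": "living room air conditioner companion", "attribute": "temperature",
     "priority": 2, "operation": "raise temperature by 3 degrees"},
    {"device": "living room electric heater", "attribute": "temperature",
     "priority": 3, "operation": "adjust from gear 1 to gear 2"},
]

def select_by_priority(table, target_attribute):
    # Second setting condition: among second candidate devices whose
    # attribute matches, pick the highest priority (smallest number).
    candidates = [row for row in table if row["attribute"] == target_attribute]
    best = min(candidates, key=lambda row: row["priority"])
    return best["device"], best["operation"]
```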
As shown in fig. 6, in an exemplary embodiment, if the service type is an information query service, step 353 may include the following steps:
In step 510, the information to be queried is located in the information set according to the keywords.
Information-to-be-queried positioning refers to searching the information set for the information to be queried according to the keywords. In one possible implementation, the information set stores a plurality of pieces of information to be queried, as well as an association relationship between each piece of information to be queried and its keywords. The information set may be represented by a queue, an array, a database table, etc., which is not limited herein.
In step 530, the intelligent device associated with the information to be queried is determined as the target device.
In this embodiment, the determination of the target device is achieved by establishing an association relationship between the intelligent device and the information to be queried. The association relationship may be stored in a queue, an array, a database table, or the like, which is not limited herein.
Taking an association relationship established as a database table as an example, the information to be queried, its keywords, and the associated intelligent device are illustrated in Table 2 below:
TABLE 2 incidence relation between information to be queried and keywords and intelligent devices
Based on Table 2 above, if the corpus of the target object is "what time did I get home last night", it can be determined from the keywords "last night" and "get home" that the information to be queried is the home-return time, and the intelligent device associated with the information to be queried, namely the intelligent door lock, is determined to be the target device. The intelligent door lock is then notified to execute the operation of querying the last time it was in the door-open state last night, or is prohibited from executing that query operation.
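Steps 510 and 530 can be sketched as two lookups: the first implements the information-to-be-queried positioning in the information set, and the second resolves the device association of Table 2. The keyword combinations and table contents here are illustrative assumptions.

```python
# Hypothetical information set: a keyword combination maps to the
# information to be queried (step 510).
INFORMATION_SET = {
    frozenset({"last night", "get home"}): "home-return time",
}

# Hypothetical Table 2 association: information to be queried maps to
# the associated intelligent device (step 530).
INFO_TO_DEVICE = {
    "home-return time": "intelligent door lock",
}

def locate_information(keywords):
    """Step 510: locate the information to be queried by its keywords."""
    return INFORMATION_SET[frozenset(keywords)]

def resolve_target_device(info):
    """Step 530: the intelligent device associated with the information
    to be queried is determined as the target device."""
    return INFO_TO_DEVICE[info]

info = locate_information(["get home", "last night"])
device = resolve_target_device(info)
```

A `frozenset` key is used so the keyword order in the corpus does not matter; this is a design choice of the sketch, not of the embodiment.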
In step 550, it is determined whether the object type is the guest type.
The inventors have realized that, in order to protect the privacy of the host, the information to be queried is not always allowed to be queried by target objects of every object type. For example, information to be queried that concerns the privacy of the host must not be queried by a guest-type target object. Therefore, before determining the target device and the setting operation related to the keywords, it is necessary to determine whether the object type of the target object is the guest type, and further whether the information to be queried is allowed to be queried.
Specifically, if the object type of the target object is the host type, step 570 is performed; if the object type of the target object is the guest type, step 560 is performed.
In step 560, it is determined whether the information to be queried is allowed to be queried.
Specifically, if the information to be queried belongs to queryable information, step 570 is executed; otherwise, if the information to be queried belongs to non-queryable information, step 590 is executed.
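The branching of steps 550 to 590 can be summarized in a short sketch. The object-type labels and the set of queryable information are assumptions for illustration.

```python
def choose_setting_operation(object_type, info, queryable_info):
    """Steps 550-590: a host-type target object may query any information;
    a guest-type target object may only query information that belongs to
    the queryable set."""
    if object_type == "host":            # step 550 -> step 570
        return ("query", info)
    if info in queryable_info:           # step 560 -> step 570
        return ("query", info)
    return ("prohibit query", info)      # step 560 -> step 590

# A guest asking about host privacy is refused:
result = choose_setting_operation("guest", "number of family members",
                                  {"home-return time"})
```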
It should be noted that the execution sequence of steps 530 to 560 is not limited in this embodiment; in other embodiments, step 550 may be executed first and step 530 afterwards, which is not limited herein.
In step 570, the operation of querying the information to be queried is taken as a setting operation.
The operation of querying the information to be queried is used to instruct the target device to query the device state related to the information to be queried.
It should be noted that, for the gateway, after the target device reports the queried device state related to the information to be queried (including but not limited to state change time, electricity consumption, online state, offline state, etc.), the reply data of the information to be queried may be determined not only from the received device state itself, but also by statistical processing of the received device states. The statistical processing includes, for example, summation, averaging, taking a maximum/minimum value, etc., which is not limited herein. In this way, the information query service can be greatly enriched, and the actual requirements of users regarding information queries can be better satisfied.
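A minimal sketch of the gateway-side statistical processing described above: besides using a single reported device state verbatim, the gateway may aggregate several reported states (summation, averaging, maximum/minimum) to build the reply data. The mode names are illustrative assumptions.

```python
def build_reply_data(states, mode="latest"):
    """Aggregate the device states reported by the target device into
    reply data for the information to be queried."""
    if mode == "latest":
        return states[-1]            # use the last reported state itself
    if mode == "sum":
        return sum(states)           # summation operation
    if mode == "average":
        return sum(states) / len(states)
    if mode == "max":
        return max(states)
    if mode == "min":
        return min(states)
    raise ValueError(f"unknown mode: {mode}")

# e.g. total electricity consumption over three reported periods
total = build_reply_data([1.2, 0.8, 2.0], mode="sum")
```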
In step 590, the operation of prohibiting the inquiry of the information to be inquired is taken as the setting operation.
That is, if the information that the target object desires to query (the information to be queried) belongs to non-queryable information, for example, the number of family members or the living habits of the family members (such as the average time of returning home), then the information to be queried is regarded as host privacy, and a target object whose object type is the guest type is prohibited from querying it. In this way, the security of the host's privacy can be fully ensured.
Through the cooperation of the above embodiments, for different corpora of different target objects, the object types differ and the service types differ, so that a method for acquiring device control data under different service execution modes is realized. The method is applicable not only to host-type target objects (such as the owner), where the intelligent devices are controlled according to the historical behavior of the host so that the environment is improved according to the host's daily habits and the host's comfort is fully ensured, but also to guest-type target objects (such as guests): even if a guest is unfamiliar with the intelligent devices in the smart home scenario, the guest can fully enjoy the convenience brought by the smart home without any complicated operation, which is beneficial to improving the user experience.
Referring to fig. 7, in one possible implementation manner, after step 390, the method may further include the following steps:
In step 610, an execution result returned by the target device when executing the setting operation is received.
In step 630, reply data corresponding to the corpus is generated according to the execution result.
For example, if the corpus of the target object is "it is cold in the living room", after the living room air conditioner increases the temperature by 3 degrees, the gateway may generate the reply data "OK" corresponding to the corpus, and notify the smart speaker to broadcast the reply data "OK".
Alternatively, if the corpus of the target object is "what time did I get home last night", that is, the information to be queried is determined to be the home-return time of last night, then after the intelligent door lock queries and reports the last time it was in the door-open state last night, the gateway can generate the reply data of the information to be queried according to that time, that is, the reply data "got home at 9 o'clock last night" corresponding to the corpus, and notify the smart speaker to broadcast the reply data "got home at 9 o'clock last night".
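The two examples above can be sketched as a simple reply-generation step on the gateway (steps 610 and 630). The execution-result format and the reply wording are assumptions of the sketch.

```python
def generate_reply(execution_result):
    """Steps 610-630: generate reply data corresponding to the corpus
    from the execution result returned by the target device."""
    if execution_result["kind"] == "control":
        # e.g. the air conditioner confirmed the temperature adjustment
        return "OK"
    if execution_result["kind"] == "query":
        # e.g. the door lock reported the last door-open time last night
        return f"got home at {execution_result['value']} last night"
    raise ValueError("unknown execution result")

reply = generate_reply({"kind": "query", "value": "9 o'clock"})
```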
In this way, a "corpus-reply data" voice interaction mode in the smart home scenario is formed, and the user can control the intelligent devices simply and efficiently: only a sentence about the environment needs to be spoken, and there is no need to care at all which intelligent device should be controlled to execute which operation, thereby improving the user experience.
The following is an apparatus embodiment of the present application, which may be used to execute the intelligent device control method of the present application. For details not disclosed in the apparatus embodiment, please refer to the method embodiments of the intelligent device control method of the present application.
Referring to fig. 8, an intelligent device control apparatus 900 is provided in an embodiment of the present application, which includes, but is not limited to, a corpus acquisition module 910, an identity recognition module 930, a data acquisition module 950, and a device control module 970.
The corpus acquisition module 910 is configured to acquire a corpus of the target object, where the corpus is used to represent a service requested by the target object.
The identity recognition module 930 is configured to identify the target object according to the corpus, and determine an object type of the target object.
The data acquisition module 950 is configured to acquire, based on the corpus, device control data that matches the object type, where the device control data is used to instruct the target device to perform the setting operation.
The device control module 970 is configured to control the target device to perform a setting operation according to the device control data, so as to complete the service requested by the target object.
It should be noted that, when the intelligent device control apparatus provided in the foregoing embodiments performs intelligent device control, the division into the foregoing functional modules is merely an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the intelligent device control apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the intelligent device control apparatus and the intelligent device control method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module performs the operation has been described in detail in the method embodiment, which is not described herein again.
Fig. 9 shows a schematic structure of an electronic device according to an exemplary embodiment. The electronic device is suitable for use in the gateway 150, server, etc. shown in the implementation environment of fig. 1.
It should be noted that the electronic device is only an example adapted to the present application, and should not be construed as providing any limitation on the scope of use of the present application. Nor should the electronic device be construed as necessarily relying on or necessarily having one or more of the components of the exemplary electronic device 2000 illustrated in fig. 9.
The hardware configuration of the electronic device 2000 may vary widely depending on configuration or performance. As shown in fig. 9, the electronic device 2000 includes a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU, Central Processing Unit) 270.
Specifically, the power supply 210 is configured to provide an operating voltage for each hardware device on the electronic device 2000.
The interface 230 includes at least one wired or wireless network interface for interacting with external devices, for example, for the interaction between the user terminal 110 and the gateway 150 in the implementation environment shown in fig. 1.
Of course, in other examples of the adaptation of the present application, the interface 230 may further include at least one serial-parallel conversion interface 233, at least one input-output interface 235, at least one USB interface 237, and the like, as shown in fig. 9, which is not particularly limited herein.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, where the resources stored include an operating system 251, application programs 253, and data 255, and the storage mode may be transient storage or permanent storage.
The operating system 251 is used for managing and controlling the various hardware devices and the application programs 253 on the electronic device 2000, so as to implement the operation and processing of the mass data 255 in the memory 250 by the central processing unit 270, and may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The application 253 is a computer program that performs at least one specific task based on the operating system 251, and may include at least one module (not shown in fig. 9), each of which may respectively include a computer program for the electronic device 2000. For example, the smart device control apparatus may be seen as an application 253 deployed on the electronic device 2000.
The data 255 may be photographs, pictures, etc. stored in the disk, but may also be corpus, historical behavior data, various rules, etc. stored in the memory 250.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 via at least one communication bus to read the computer program stored in the memory 250, thereby implementing the operation and processing of the bulk data 255 in the memory 250. The smart device control method is accomplished, for example, by the central processor 270 reading a series of computer programs stored in the memory 250.
Furthermore, the present application can be realized by hardware circuitry or by a combination of hardware circuitry and software, and thus, the implementation of the present application is not limited to any specific hardware circuitry, software, or combination of the two.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating another electronic device according to an exemplary embodiment. The electronic device is suitable for use in the user terminal 110, etc. in the implementation environment shown in fig. 1.
It should be noted that the electronic device is only an example adapted to the present application, and should not be construed as providing any limitation on the scope of use of the present application. Nor should the electronic device be construed as necessarily relying on or necessarily having one or more of the components of the exemplary electronic device 1100 shown in fig. 10.
As shown in fig. 10, the electronic device 1100 includes a memory 101, a memory controller 103, one or more (only one is shown in fig. 10) processors 105, a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other via one or more communication buses/signal lines 121.
The memory 101 may be used to store a computer program and a module, such as a computer program and a module corresponding to the smart device control method and apparatus in the exemplary embodiment of the present application, and the processor 105 executes the computer program stored in the memory 101 to perform various functions and data processing, that is, to complete the smart device control method.
The memory 101, as the carrier of resource storage, may be a random access memory, e.g., a high-speed random access memory, or a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or other solid-state memory. The storage mode may be transient storage or permanent storage.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, etc. for coupling external various input/output devices to the memory 101 and the processor 105 to enable communication with the external various input/output devices.
The radio frequency module 109 is configured to receive and transmit electromagnetic waves, and to implement mutual conversion between the electromagnetic waves and the electrical signals, so as to communicate with other devices through a communication network. The communication network may include a cellular telephone network, a wireless local area network, or a metropolitan area network, and may employ various communication standards, protocols, and techniques.
The positioning module 111 is configured to obtain the current geographic location of the electronic device 1100. Examples of the positioning module 111 include, but are not limited to, the Global Positioning System (GPS) and positioning technologies based on wireless local area networks or mobile communication networks.
The camera module 113 is coupled to a camera for taking pictures or videos. The pictures or videos taken may be stored in the memory 101, and may also be transmitted to a host computer through the radio frequency module 109.
The audio module 115 provides an audio interface to the user, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces. The interaction of the audio data with other devices is performed through the audio interface. The audio data may be stored in the memory 101 or may be transmitted via the radio frequency module 109.
The touch screen 117 provides an input-output interface between the electronic device 1100 and a user. Specifically, the user may perform an input operation, such as a gesture operation of clicking, touching, sliding, or the like, through the touch screen 117 to cause the electronic device 1100 to respond to the input operation. The electronic device 1100 displays and outputs the output content formed by any one form or combination of text, pictures or videos to the user through the touch screen 117.
The key module 119 includes at least one key to provide an interface for a user to input to the electronic device 1100, which the user can cause the electronic device 1100 to perform different functions by pressing different keys. For example, the sound adjustment key may allow a user to adjust the volume of sound played by the electronic device 1100.
It is to be understood that the configuration shown in fig. 10 is merely illustrative and that electronic device 1100 may also include more or fewer components than shown in fig. 10 or have different components than shown in fig. 10. The components shown in fig. 10 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 11, an electronic device 4000 is provided in an embodiment of the present application. The electronic device 4000 may be a user terminal, an intelligent device configured with a voice acquisition module, a gateway, a server, and so on.
In fig. 11, the electronic device 4000 includes at least one processor 4001, at least one communication bus 4002, and at least one memory 4003.
The processor 4001 is coupled to the memory 4003, for example via the communication bus 4002. Optionally, the electronic device 4000 may further comprise a transceiver 4004, which may be used for data interaction between this electronic device and other electronic devices, such as transmission and/or reception of data. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, etc.
The communication bus 4002 may include a pathway for transferring information between the aforementioned components. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 11, but this does not mean that there is only one bus or one type of bus.
The memory 4003 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 has stored thereon a computer program, and the processor 4001 reads the computer program stored in the memory 4003 through the communication bus 4002.
The computer program, when executed by the processor 4001, implements the smart device control method in the above embodiments.
Further, in an embodiment of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the smart device control method in each of the above embodiments.
In an embodiment of the application, a computer program product is provided, which comprises a computer program stored in a storage medium. The processor of the computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the computer device executes the smart device control method in the above embodiments.
Compared with the related art, the user only needs to express the actual demand through the corpus; a client associated with the intelligent device does not need to be installed in advance, and corresponding voice instructions do not need to be configured in the running client in advance. This effectively solves the problem in the related art that controlling intelligent devices is complicated. The method is applicable not only to host-type target objects but also to guest-type target objects, effectively improving the user experience.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are also intended to fall within the scope of the present application.