Summary of the Invention
The first technical problem to be solved by the present invention is the need to provide a human-computer interaction method and system for an intelligent robot that can ensure the continuity of a dialogue, enhance the interest of the interaction, and improve the user's interactive experience.
In order to solve the above technical problem, an embodiment of the present application firstly provides a dialogue interaction processing method for an intelligent robot, the method comprising the following steps: during the dialogue interaction between the intelligent robot and a user, parsing the contextual dialogue interaction information and generating a corresponding topic label, the topic label being used to mark the topic to which each round of dialogue interaction belongs; acquiring the dialogue data output by the user in the current round, and parsing the user intention with reference to the topic label of the contextual dialogue interaction information; and generating dialogue interaction data by decision-making according to the user intention.
Preferably, a topic label determination model is used to determine the topic label for each round of dialogue, the topic label determination model being formed by deep learning training on the data of multi-round dialogues under the same topic.
Preferably, in the step of generating dialogue interaction data by decision-making according to the user intention, the dialogue interaction content matching the topic label is selected from a dialogue database and combined with the user's dialogue intention of the current round, and the dialogue interaction data is generated and output to the user, wherein the data of the dialogue database are labeled with different topic labels.
Preferably, in the dialogue database, corresponding answer modes under different topic labels are set for the same question; after the topic label of the current round is determined, the dialogue interaction data is generated with reference to its corresponding answer mode.
Preferably, the method further comprises: identifying the user's identity and judging whether the current user is a child user; if so, conducting the dialogue interaction based on a dialogue database and topic labels built for child users.
According to another aspect of the embodiments of the present invention, a dialogue interaction processing system for an intelligent robot is further provided, the system comprising the following modules: a topic label determining module, which, during the dialogue interaction between the intelligent robot and a user, parses the contextual dialogue interaction information and generates a corresponding topic label, the topic label being used to mark the topic to which each round of dialogue interaction belongs; a user intention parsing module, which acquires the dialogue data output by the user in the current round and obtains the user intention by parsing with reference to the topic label of the contextual dialogue interaction information; and a dialogue data generation module, which generates dialogue interaction data by decision-making according to the user intention.
Preferably, the topic label determining module uses a topic label determination model to determine the topic label for each round of dialogue, the topic label determination model being formed by deep learning training on the data of multi-round dialogues under the same topic.
Preferably, the dialogue data generation module selects the dialogue interaction content matching the topic label from a dialogue database, combines it with the user's dialogue intention of the current round, and generates and outputs the dialogue interaction data to the user, wherein the data of the dialogue database are labeled with different topic labels.
Preferably, in the dialogue database, corresponding answer modes under different topic labels are set for the same question; after the topic label of the current round is determined, the dialogue data generation module generates the dialogue interaction data with reference to its corresponding answer mode.
Preferably, the system further comprises a user identification module, which identifies the user's identity and judges whether the current user is a child user; when the user is a child user, the dialogue data generation module conducts the dialogue interaction based on a dialogue database and topic labels built for child users.
According to another aspect of the embodiments of the present invention, a dialogue interaction system for an intelligent robot is further provided, the system comprising: a cloud server provided with the dialogue interaction processing system as described above; and an intelligent robot, which collects multi-modal interaction data of the interaction with the user, sends the multi-modal interaction data to the cloud server, and outputs the dialogue interaction statements from the cloud server to the user.
Preferably, the intelligent robot is a story machine or a chat robot.
Compared with the prior art, one or more embodiments of the above solution may have the following advantages or beneficial effects:
In the embodiments of the present invention, during the dialogue interaction between the intelligent robot and the user, the contextual dialogue interaction information is parsed to generate a corresponding topic label; then the dialogue data output by the user in the current round is acquired, the user intention is obtained by parsing with reference to the topic label of the contextual dialogue interaction information, and the dialogue output data is generated by decision-making according to the user intention. The embodiments of the present invention train a topic label generation model by means of deep learning, so that the corresponding topic label can be determined for any round of dialogue; after the user's voice information is received, an output under the same topic can be generated with reference to the current topic label, which ensures the continuity of the dialogue, improves the dialogue quality, and enhances the user's dialogue experience.
Other features and advantages of the present invention will be set forth in the following description, and will partly become apparent from the description or be understood by implementing the technical solution of the present invention. The objects and other advantages of the present invention can be realized and obtained through the structures and/or flows specifically pointed out in the description, the claims and the accompanying drawings.
Embodiment
Fig. 1 is a schematic diagram of an application scenario of the story machine or chat robot of the embodiment of the present application. The application scenario includes an intelligent robot (also referred to as a "dialogue robot") 20 and a cloud brain (cloud server) 10, the intelligent robot 20 conducting voice dialogue interaction with a user U. Besides the physical robot shown in Fig. 1, the robot 20 can also be a robot application program carried on a smart device; the smart device can be a traditional desktop PC, a laptop, a holographic projection device, or a portable terminal device that can access the Internet through wireless means such as a WLAN or a mobile communication network. In the embodiments of the present application, wireless terminals include, but are not limited to, mobile phones and netbooks, and such wireless terminals generally have multi-modal information acquisition and data transmission capabilities. The cloud brain 10 serves as the brain end of the intelligent robot 20 and is configured with a dialogue interaction processing system 100, which is used to process the multi-modal input data sent by the intelligent robot 20, for example parsing visual data to complete visual recognition and visual detection, and performing affective computation, cognitive computation and semantic understanding; in the dialogue interaction it mainly processes the user's voice data, so as to decide the dialogue voice or other multi-modal output data to be output by the robot 20.
It should be pointed out that the dialogue interaction method and system of this intelligent robot are also applicable to the dialogue application scenarios of children's AI devices, such as a children's story machine (a children's AI device that lets a pediatric population listen to music, stories, and traditional culture audio and video, and that may take the cartoon IP image of an animal or a character). In addition, the story machine can be controlled by an intelligent handheld device, which completes the setting of, and the issuing of instructions to, the intelligent robot.
The composition and functions of the intelligent robot of the present invention are described below by taking a chat robot in physical form as an example.
Fig. 2 is a functional block diagram of the story machine or chat robot of the embodiment of the present application. As shown in Fig. 2, the robot mainly collects the multi-modal interaction data of the interaction with the user, sends the multi-modal interaction data to the cloud server 10, and outputs the dialogue interaction statements from the cloud server to the user. The robot control system mainly includes an interaction information acquisition module 2110, a communication module 2120, a voice output unit 2130, a robot limb control unit 2210 and an attitude sensor 2220.
The interaction information acquisition module 2110 collects external interaction input information, and specifically includes a voice acquisition unit 2111 that collects external voice information, a touch sensor 2112 that collects external touch pressure data, and an image acquisition unit 2113 that collects external image information. The communication module 2120 sends the multi-modal information collected by the interaction information acquisition module 2110 to the cloud brain 10 for processing through a networking interaction unit 2121, and receives the dialogue output data, or other multi-modal decision data, obtained from the cloud brain 10's decision-making in response to the user's interaction intention. The networking interaction unit 2121 realizes the data interaction between the communication module 2120 and the cloud brain 10. The voice output unit 2130 outputs a matched voice response according to the voice control information. The robot limb control unit 2210 outputs matched robot limb control signals according to the action control information, to drive the limbs of the robot to make corresponding actions. The attitude sensor 2220 monitors the robot's current attitude, so that the robot can avoid forcibly performing an action while ignoring its own current posture, thereby preventing posture errors or losses of balance that would make it fall down.
In view of the power supply requirements, data processing needs and functional differences of the functional modules, the electronic control system of the chat robot is configured as two parts: a host computer system and a lower computer system. The host computer system and the lower computer system each include an independent main control board, and the external circuit elements of the two systems are connected to their respective main control boards. In this way, the modules whose resources conflict are separated while the overall integration of the system is guaranteed, thereby ensuring the stable and efficient operation of the system.
In this example, the different functional modules are allocated to the host computer system and the lower computer system in the manner shown in Fig. 2. Specifically, the robot limb control unit 2210 and the attitude sensor 2220 are constructed in the lower computer system 220, and the other functional modules are constructed in the host computer system 210.
In the present embodiment, the system can also include a power display module that displays the robot's current power information. Considering that the power display requires little data processing but needs a certain power drive support (driving light-emitting diodes), the power display module is arranged in the lower computer system. In operation, the main control board of the host computer system acquires and transmits the robot's current power information, and the power display module outputs the corresponding power display according to the power information.
Further, in order to make it easy for the user to know the robot's current interaction state, an interaction display module displaying the robot's current interaction state is also provided in the lower computer system. In operation, the main control board of the host computer system acquires and distributes the robot's current interaction state, the interaction states including a recording state, a voice/action output state and a semantic parsing state; the interaction display module outputs the corresponding interaction state display according to the interaction state.
In the hardware block diagram shown in Fig. 3, the main control board of the host computer system is a main control board based on the Allwinner dual-core A20 processor, on which a wireless networking module (WiFi), a microphone noise reduction module and an audio amplification module are integrated. The A20 processor performs the preprocessing of the external interaction input information and can generate the action control instructions for the robot's movements; the WiFi networking module realizes the data interaction with the cloud brain 10; the microphone noise reduction module, together with the microphone connected to the main control board, realizes the acquisition of external voice information; and the audio amplification module, together with the loudspeaker connected to the main control board, realizes the output of the voice response.
The interfaces provided by the host computer main control board 210 are as follows: a capacitive touch interface 212, a three-wire interface whose line sequence is power (VCC), ground (GND), output (OUT), connected to the touch module 204; a serial communication interface 216, a three-wire interface whose line sequence is ground (GND), uplink (RX), downlink (TX), connected to the serial communication interface 217 of the lower computer main control board 220; a speaker interface 213, a two-wire interface whose line sequence is audio signal positive (Speaker+), audio signal negative (Speaker-) (in this example there are two speaker interfaces), connected to the loudspeaker 205; a microphone interface 211, a two-wire interface whose line sequence is microphone signal positive (Mic+), microphone signal negative (Mic-), connected to the microphone 203; a charging port 214, a two-wire interface whose line sequence is power (VCC), ground (GND), connected to the robot charging port 201 and to the power management module 215; and the battery charging interface of the power management module 215, a two-wire interface whose line sequence is power input (DCIN), ground (GND), connected to the lithium battery 202.
The main control board of the lower computer system is a main control board based on the STMicroelectronics microcontroller STM32, on which a six-axis attitude sensor MPU6500 and motor drive modules are integrated. The microcontroller STM32 generates the robot limb control signals; the six-axis attitude sensor MPU6500 monitors the robot's current attitude; and the motor drive modules drive the robot's limb actions.
The interfaces provided by the lower computer main control board 220 are as follows: a power interface, a two-wire interface whose line sequence is VCC, GND (not shown), through which the voltage regulator chip 223 of the power management module is connected to the lithium battery 202 and to the power management module 215 of the host computer main control board 210; a serial communication interface 227, a three-wire interface whose line sequence is GND, RX, TX, data transmission between the host computer main control board 210 and the lower computer main control board 220 being realized through serial communication; three motor interfaces, each a two-wire interface whose line sequence is motor positive (Motor+), motor negative (Motor-), through which the motor drivers (224, 225, 226) drive the robot's motors (231, 232, 233) (the three motors are respectively two leg motors and one arm motor) to realize the robot's actions; a power display interface, a four-wire interface whose line sequence is output (IO), output (IO), output (IO), ground (GND), connected to the power display lamp 206 (the power display lamp is a multi-color light-emitting diode (LED) lamp, and the three I/O interfaces correspond to red R, green G and blue B respectively); and an interaction display interface, a two-wire interface whose line sequence is PWM, GND, connected to the interaction display lamp 207 (the interaction display lamp is a breathing light).
The host computer and lower computer main control boards 210 and 220 of the system are controlled by one physical switch. The system boot process is as follows:
The system is powered on, and the host computer main control board 210 completes networking and initialization;
The interaction display lamp 207 of the lower computer main control board 220 is in the breathing state and waits for the host computer main control board 210 to finish initializing.
The interaction process is as follows:
The initialization of the host computer main control board 210 is completed, and the host computer and lower computer main control boards 210 and 220 transfer data normally through the serial ports (216, 217);
The microphone 203 collects the audio signal, which is handed, after noise reduction and amplification, to the processing chip A20 of the host computer main control board 210; the A20 transmits the voice information to the cloud brain 10 through the networking module; the cloud brain 10 returns multi-modal decision data to the A20 through the networking module; the A20 controls the loudspeaker 205 to feed back a voice response to the user, and at the same time sends the action control information (the actions to be performed), the power information and the interaction state information to the lower computer main control board 220 through the serial port;
The lower computer main control board 220 receives the control instructions of the A20 through the serial port, and completes the multi-modal interactive actions such as the power display, the interaction display, and the leg and hand actions.
In the power display process, the system's power level is shown by the RGB tri-color lamp: R indicates insufficient power, B indicates normal power, and G indicates sufficient power. At the same time, the host computer main control board 210 informs the user of the power status through the loudspeaker. In the interaction display process, the LED lamp is controlled by PWM: being always on reminds the user that the robot is recording; being off reminds the user that the robot is outputting voice; and blinking reminds the user that the robot is networking and performing semantic parsing.
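The display logic described above can be sketched in a few lines. This is an illustrative model only: the battery thresholds (0.2 / 0.8), the state names, and the mode strings are assumptions, not values given in the text.

```python
def battery_led_color(level: float) -> str:
    """Map a battery level in [0, 1] to the RGB indicator color.

    R = insufficient power, B = normal power, G = sufficient power,
    per the scheme above. The 0.2 / 0.8 thresholds are assumed.
    """
    if level < 0.2:
        return "R"
    if level < 0.8:
        return "B"
    return "G"


def interaction_led_mode(state: str) -> str:
    """Map an interaction state to the PWM-driven LED behavior."""
    modes = {
        "recording": "steady_on",  # always on: robot is recording
        "speaking": "off",         # off: robot is outputting voice
        "parsing": "blinking",     # blinking: semantic parsing in the cloud
    }
    return modes[state]
```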
Each component and function of the dialogue interaction system 100 of the cloud brain 10 is described below.
As shown in Fig. 4, the dialogue interaction system 100 includes a topic label determining module 110, a user intention parsing module 120 and a dialogue data generation module 130. The functions of the above modules are described in detail below.
The topic label determining module 110 parses the contextual dialogue interaction information during the dialogue interaction between the intelligent robot and the user, and generates a corresponding topic label, which is used to mark the topic to which each round of dialogue interaction belongs.
Specifically, after receiving the voice information forwarded by the communication module 2120, the topic label determining module 110 responds to the voice information and generates corresponding text information. First, a comprehensive voice recognition analysis is performed on the voice information after preprocessing such as noise reduction, and text information corresponding to the voice information is generated. Then, text analysis is performed on the text information, that is, the specific semantic content of the text is obtained. Specifically, after the recognition result is obtained, semantic parsing is performed on the recognition result using natural language processing technology. Semantic parsing refers to converting a given natural language into a certain formal representation that reflects its meaning, that is, converting a natural language that humans can understand into a formal language that a computer can understand. After the parsing result is obtained, the semantic similarity between the content of the parsing result and the content in a preset knowledge base (the similarity between questions) is calculated, so as to search the knowledge base for data that match the parsing result. At this point, the parsing operation on the dialogue interaction information is completed.
After the semantic understanding, the topic of the obtained voice text information can be determined by judging whether it contains a specific vocabulary entry related to a topic. A "specific vocabulary" entry is a word or phrase that has been set in advance as being related to a topic, such as the name of a star or the name of a film. Moreover, those skilled in the art can update or add "specific vocabulary" entries according to current network buzzwords or user needs, making the content of the database richer and improving the user experience. Each entry in the "specific vocabulary" database can be traversed, the lexical similarity and/or semantic similarity between the obtained voice text information and each specific vocabulary entry can be calculated, and it is judged whether a corresponding specific vocabulary entry exists in the voice text information. When the lexical similarity is greater than a threshold and its value is very large, it can be determined that a specific vocabulary entry exists in the voice text without calculating the semantic similarity; otherwise, a weighted sum of the semantic similarity and the lexical similarity is calculated to determine whether a specific vocabulary entry exists. The method of judging whether a specific vocabulary entry exists in the obtained voice text information can also be realized by other techniques, which are not limited here.
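The two-stage decision above (lexical similarity first, weighted sum as a fallback) can be sketched as follows. This is a minimal stand-in, not the patent's implementation: character-level similarity from the standard library replaces a real lexical model, and the vocabulary entries, threshold, and weight are assumed.

```python
from difflib import SequenceMatcher

# Assumed sample entries in the "specific vocabulary" database.
SPECIFIC_VOCAB = ["Wolf Warrior 2", "Roman Holiday"]


def lexical_sim(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] as a crude lexical measure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def find_specific_vocab(text, semantic_sim=None, threshold=0.8, w_lex=0.5):
    """Return the first matching specific vocabulary entry, or None.

    If the lexical similarity alone clears the threshold, the semantic
    similarity is skipped; otherwise (when a semantic_sim function is
    supplied) a weighted sum of the two is checked, mirroring the
    two-stage decision described in the text.
    """
    for term in SPECIFIC_VOCAB:
        lex = max(lexical_sim(term, chunk)
                  for chunk in [text] + text.split())
        if term.lower() in text.lower() or lex > threshold:
            return term
        if semantic_sim is not None:
            if w_lex * lex + (1 - w_lex) * semantic_sim(text, term) > threshold:
                return term
    return None
```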
If no specific vocabulary entry is found, the user intention of the dialogue interaction information is parsed according to the topics of the previous rounds of dialogue, and the topic is determined based on the user intention, as shown in the following example:
Q: The recent action movie "Wolf Warrior 2" is really good. Have you seen it?
A: I have not seen it.
When topic judgment is performed on the A content of this round of dialogue interaction, no matched specific vocabulary entry is found, so its topic information cannot be determined merely from specific vocabulary. Therefore, with reference to the theme of the previous interaction statement, the movie "Wolf Warrior 2", the user intention of the A content can be determined to be "has not seen the movie 'Wolf Warrior 2'", from which it can be determined that the topic is still the movie "Wolf Warrior 2". The correspondence between the topic label and the dialogue interaction statement from which the topic label was extracted is then stored in a certain memory, for example:
Q: The recent action movie "Wolf Warrior 2" is really good. Have you seen it? [Theme - Movie "Wolf Warrior 2"]
A: I have not seen it. [Theme - Movie "Wolf Warrior 2"]
When the topic content of the next round of dialogue interaction information is to be determined, the topic determination can be completed well by retrieving the theme of the contextual dialogue from this memory.
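The memory described above can be sketched as a minimal per-turn store. The class and method names are assumptions for illustration; each turn is recorded with its topic label, and a later round can fall back to the most recent stored label.

```python
class TopicMemory:
    """Minimal sketch of the per-dialogue topic memory."""

    def __init__(self):
        self._turns = []  # list of (utterance, topic_label)

    def record(self, utterance, topic_label):
        self._turns.append((utterance, topic_label))

    def current_topic(self):
        """Topic label of the most recent turn, or None for a fresh dialogue."""
        return self._turns[-1][1] if self._turns else None


mem = TopicMemory()
mem.record("The recent action movie Wolf Warrior 2 is great. Have you seen it?",
           "Theme - Movie 'Wolf Warrior 2'")
# the elliptical reply inherits the contextual topic
mem.record("I have not seen it.", mem.current_topic())
```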
In addition to determining the topic label by searching for specific vocabulary entries, in a preferred example the topic label determining module 110 uses a topic label determination model to determine the topic label for each round of dialogue, the topic label determination model being formed by deep learning training on the data of multi-round dialogues under the same topic.
The specific learning method is as follows:
Step 1: obtain the sample information used to train a preset classifier. In the present embodiment, multi-round dialogue data under multiple topics are selected, and the sample dialogue data under each topic are respectively used to train the preset classifier. Preferably, historical voice data labeled with different themes by manual classification can be collected as the sample data, and the voice information is converted into text form before training.
Step 2: preprocess the sample data, removing noise text such as "eh" to obtain the training text.
Step 3: extract the text features of the training text.
Specifically, word segmentation can be performed on the training text according to a preset step length, and the text features are obtained based on the segmentation result.
Step 4: input the text features of the training text into the classifier for training, and obtain the target classifier.
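The four training steps above can be sketched as follows. A tiny bag-of-words Naive Bayes classifier stands in for the deep-learning model named in the text, and the sample dialogues, the noise-word list, and the whitespace segmentation are all illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

NOISE = {"eh", "um", "uh"}  # step 2: filler noise to strip (assumed examples)


def tokenize(text):
    # step 3: crude fixed-step segmentation, whitespace tokens here
    return [t for t in text.lower().split() if t not in NOISE]


class TopicClassifier:
    """Stand-in target classifier trained on labeled dialogue text."""

    def fit(self, samples):  # step 4: samples = [(text, topic_label), ...]
        self.word_counts = defaultdict(Counter)
        self.topic_counts = Counter()
        for text, topic in samples:
            self.topic_counts[topic] += 1
            self.word_counts[topic].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        tokens = tokenize(text)
        best, best_lp = None, float("-inf")
        n_samples = sum(self.topic_counts.values())
        for topic, n in self.topic_counts.items():
            lp = math.log(n / n_samples)  # class prior
            total = sum(self.word_counts[topic].values()) + len(self.vocab)
            for t in tokens:  # Laplace-smoothed word likelihoods
                lp += math.log((self.word_counts[topic][t] + 1) / total)
            if lp > best_lp:
                best, best_lp = topic, lp
        return best


# step 1: assumed sample dialogues under two topics
clf = TopicClassifier().fit([
    ("have you seen the new movie", "movie"),
    ("that movie was well received", "movie"),
    ("i caught a cold yesterday", "health"),
    ("eh you should see a doctor", "health"),
])
```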
After the topic label determining module 110 completes the voice-to-text processing of each round of input dialogue interaction information, the processed text is input into the target classifier, and the theme of that round of dialogue interaction information is obtained. With this method, even when the converted text contains no entity information, for example when the dialogue contains entity-free content such as "No", its subject content can still be determined accurately, and the processing speed is faster than that of the specific vocabulary lookup method mentioned above.
The user intention parsing module 120 acquires the dialogue data output by the user in the current round, and obtains the user intention by parsing with reference to the topic label of the contextual dialogue interaction information.
For user dialogue data whose information content is relatively rich, for example content that includes entity information, the user intention parsing module 120 can operate according to the semantic understanding of the topic label determining module 110, generating the corresponding text information from the voice information and then performing semantic understanding to obtain the user intention. However, some users' dialogue content is rather terse and generally contains no content of practical significance, for example semantically incomplete content such as "No", "Seen it" or "Not yet"; the user's true intention cannot be recognized directly from the corresponding text of the voice alone. Therefore, when parsing the user intention, it is preferably identified in combination with the topic label of the contextual dialogue interaction.
With reference to the above example, the parsing result obtained by voice recognition is "I have not seen it", and the theme information of the previous dialogue content is: Movie - "Wolf Warrior 2". Therefore, by combining the two, the true intention of this dialogue interaction can be determined to be "has not seen the movie 'Wolf Warrior 2'". Compared with the prior art, if the user intention were obtained merely from the parsing result of the current dialogue content, it could vary widely, easily causing the replied voice information to deviate considerably from the actual intention and bringing a bad user experience; this example, which determines the intention by combining the theme information of the contextual dialogue, can better solve the above problem.
The dialogue data generation module 130 generates dialogue interaction data by decision-making according to the user intention.
Specifically, the dialogue data generation module 130 selects the dialogue interaction content matching the topic label from the dialogue database 140, combines it with the user's dialogue intention of the current round, and generates and outputs the dialogue interaction data to the user, wherein the data of the dialogue database 140 are labeled with different topic labels.
In the dialogue database 140, corresponding answer modes under different topic labels are set for the same question. Specifically, the database 140 stores lists of questions and response contents, whose structure lists the probable corresponding response modes of the same question, and the questions with incomplete semantics are topic-labeled according to the response content, as in the list shown below:
The specific topic label determination method may use the topic label determination model: the response content is input into the model, and the corresponding label is obtained. By labeling the data in the database with topic labels, an appropriate reply can be selected even when the question Q input by the user may occur under multiple topics.
After the topic label of the current round is determined, the dialogue data generation module 130 generates the dialogue interaction data with reference to the corresponding answer mode. A matched output template is provided for each response mode set in the database, and the dialogue interaction data is generated based on the output template.
Topic 1:
Q: Is there a good movie to recommend?
A: I hear "Roman Holiday" is good.
Q: Already seen.
A2 (correct): Let me recommend another one: the newly released "Wolf Warrior 2" is well received.
Topic 2:
Q: I have caught a cold.
A: Poor thing. Have you seen a doctor?
Q: Already seen.
A2: That is good. What did the doctor say?
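The Topic 1 / Topic 2 behavior above can be sketched as a lookup keyed by the utterance together with the current topic label, so that the same ambiguous reply maps to different answer modes. The dictionary structure, the topic label strings "movie" and "health", and the fallback reply are assumptions for illustration.

```python
# Assumed answer-mode table: (utterance, topic_label) -> reply template.
ANSWER_MODES = {
    ("Already seen.", "movie"):
        "Let me recommend another one: the newly released Wolf Warrior 2 "
        "is well received.",
    ("Already seen.", "health"):
        "That is good. What did the doctor say?",
}


def decide_reply(utterance: str, topic_label: str) -> str:
    """Select the answer mode matching the current topic label."""
    return ANSWER_MODES.get((utterance, topic_label),
                            "Sorry, could you say that again?")  # assumed fallback
```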
In addition, in other examples, as shown in Fig. 5, the dialogue interaction processing system 100 of the present invention can also include a user identification module 150, which identifies the user's identity and judges whether the current user is a child user; if so, the dialogue interaction is conducted based on a dialogue database and topic labels built for child users.
Specifically, reference may be made to the following child user recognition method. For example, the image acquisition unit 2113 collects the face information of the current user and sends it through the communication module 2120 to the user identification module 150 of the cloud brain 10, which first detects the presence of a face in the scene and determines its position. Then, after the face is detected, face recognition is performed: the detected face to be recognized is matched and compared with the different types of faces in a database to obtain the relevant information. Face recognition can adopt the method of extracting geometric face features or the method of template matching; in this example the method of template matching is preferred. In addition, whether the current user is a child user can also be identified by means of acoustic feature detection, for example by recognizing the voice input by the user and judging whether the voice is a child's voice. In the present embodiment, a voice recognition model is provided in the user identification module 150 in advance; the voice input by the user can be recognized by the voice recognition model to determine the category of the voice. The voice recognition model can be a machine learning model, which, after training and learning on a large amount of sample data, can classify the category of a voice. Before testing a voice, the classifier needs to be trained to obtain the target classifier, which specifically includes the following steps:
Step 1: obtain the sample voices used to train a preset classifier. In the present embodiment, children's voices can be sampled as the sample voices, and the collected sample voices are used to train the preset classifier. Preferably, historical voice data labeled as children's voices by manual classification can be collected as the sample voices.
Step 2: perform voice activity detection on the sample voices to remove the silence in the training data and obtain the training voices.
Step 3: extract the acoustic features of the training voices.
Specifically, the training voices can be divided into frames according to a preset step length, and the acoustic features are then extracted from each frame of the training voices, wherein the acoustic features can be filter bank (Fbank40) features or Mel-frequency cepstral coefficient (MFCC) features.
Step 4: input the acoustic features of the training voices into the classifier for training, and obtain the target classifier.
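Steps 2 and 3 above can be sketched in pure Python. An energy-threshold voice activity detector removes silent frames, and a simple log-energy value stands in for the Fbank40/MFCC features named in the text; the frame length, step, and threshold are all assumed values.

```python
import math


def frames(signal, frame_len=400, step=160):
    """Split a sample sequence into fixed-step frames (step 3 framing)."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, step)]


def vad(signal, threshold=0.01, frame_len=400, step=160):
    """Keep only frames whose mean energy clears the threshold (step 2)."""
    return [f for f in frames(signal, frame_len, step)
            if sum(x * x for x in f) / frame_len > threshold]


def log_energy(frame):
    """Stand-in acoustic feature: log of the frame's mean energy."""
    return math.log(sum(x * x for x in frame) / len(frame) + 1e-10)


# toy example: 800 silent samples followed by 800 "voiced" samples
silence = [0.0] * 800
voiced = [0.5 if i % 2 else -0.5 for i in range(800)]
voiced_frames = vad(silence + voiced)
features = [log_energy(f) for f in voiced_frames]
```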
The user identification module 150 is provided mainly in view of the following: the dialog database in the cloud brain 10 is diverse and may contain sensitive content or harmful network information. After the user identification module 150 identifies a child user, the cloud brain 10 further selects the dialog database and dialogue labels built for child users to conduct the interaction. This avoids sending sensitive content to the child user and prevents adverse effects on the physical and mental health of children.
The structure of the dialog database built for child users is similar to the aforementioned database structure, but its question-and-answer content mainly comprises intelligence-development content such as children's education and entertainment, while sensitive or hard-to-understand information is shielded. Based on this kind of dialog database, dialogues suitable for children can be output in a targeted manner, and sensitive content such as adult entertainment culture will not be pushed during human-computer interaction.
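Under the assumption of a separate child-specific database as described above, the selection logic might look like the following sketch. The database contents, the (topic label, keyword) key scheme, and the `pick_answer` helper are all hypothetical illustrations, not the embodiment's actual storage format.

```python
# Hypothetical miniature databases keyed by (topic_label, keyword); the real
# embodiment's databases are far larger and labelled offline.
GENERAL_DB = {
    ("astronomy", "moon"): "The Moon is a differentiated rocky body about 4.5 billion years old.",
    ("adult", "casino"): "Here are some nearby casinos...",
}
CHILD_DB = {
    # Child entries: educational phrasing; sensitive topics are simply absent.
    ("astronomy", "moon"): "The Moon is Earth's friend in the sky - it circles us once a month!",
}

def pick_answer(topic_label, keyword, is_child):
    """Route the lookup to the child-specific database for identified child
    users; topics shielded there (e.g. 'adult') return no answer at all."""
    db = CHILD_DB if is_child else GENERAL_DB
    return db.get((topic_label, keyword))
```

This also illustrates the earlier point that the same question can carry different answer modes under the same topic label depending on the identified audience: the "moon" entry exists in both databases with differently phrased answers.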
Fig. 6 is a flow diagram of example one of the dialogue interaction method for an intelligent robot according to an embodiment of the present application. The interaction flow of the interactive system is described below with reference to Fig. 6.
As shown in Fig. 6, in step S610, while the intelligent robot interacts with the user in dialogue, the topic label determining module 110 parses the contextual dialogue interaction information and generates a corresponding topic label, which is used for marking the topic to which each round of dialogue interaction belongs. In step S620, the user intent parsing module 120 obtains the dialogue data output by the user in the current round and obtains the user intent with reference to the topic label of the contextual dialogue interaction information. In step S630, the dialogue data generation module 130 decides and generates the dialogue interaction data according to the user intent.
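The S610 to S630 flow can be made concrete with a small sketch. The keyword tables and module stand-ins below are illustrative assumptions only; the embodiment uses trained models for modules 110, 120, and 130, not keyword matching.

```python
# Keyword-based stand-ins for modules 110/120/130; tables and names are invented.
TOPIC_KEYWORDS = {
    "weather": {"rain", "sunny", "snow", "temperature"},
    "food": {"eat", "hungry", "pizza", "dinner"},
}

def determine_topic_label(turns):
    """Step S610: label the topic of the dialogue context (module 110)."""
    words = {w for turn in turns for w in turn.lower().split()}
    for label, keys in TOPIC_KEYWORDS.items():
        if words & keys:
            return label
    return "chitchat"

def parse_user_intent(user_turn, topic_label):
    """Step S620: combine the current round's turn with the context's topic label (module 120)."""
    return {"topic": topic_label, "query": user_turn}

def generate_response(intent):
    """Step S630: decide and generate a reply that stays under the same topic (module 130)."""
    return "[{}] Let's keep talking about that: {}".format(intent["topic"], intent["query"])

def dialogue_round(context_turns, user_turn):
    label = determine_topic_label(context_turns + [user_turn])
    return generate_response(parse_user_intent(user_turn, label))
```

Note how the topic label determined from the whole context, not just the current turn, is what keeps a vague follow-up ("and what about snow?") inside the established topic.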
In the embodiment of the present invention, a topic label generation model is trained by the method of deep learning, so that a corresponding topic label can be determined for any round of dialogue. After the voice information of the user is received, output under the same topic can be generated with reference to the current topic label, ensuring the continuity of the dialogue, thereby improving dialogue quality and enhancing the user's dialogue experience.
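As a hedged illustration of the train-then-label workflow, the sketch below fits a shallow bag-of-words softmax classifier by gradient descent in NumPy. The embodiment trains a deep model on large amounts of multi-turn dialogue data under the same topic; this shallow stand-in and its invented sample data only show the shape of the workflow, not the actual model.

```python
import numpy as np

# Invented sample data: a few utterances per topic, standing in for the large
# corpus of same-topic multi-turn dialogues used to train the real model.
DIALOGUES = [
    "what will the weather be like tomorrow",
    "is it going to rain today",
    "i am hungry what should i eat",
    "pizza or noodles for dinner tonight",
]
LABELS = ["weather", "weather", "food", "food"]

def build_vocab(texts):
    vocab = {}
    for text in texts:
        for word in text.split():
            vocab.setdefault(word, len(vocab))
    return vocab

def vectorize(text, vocab):
    v = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            v[vocab[word]] += 1.0
    return v

def train_topic_model(dialogues, labels, epochs=300, lr=0.5):
    """Fit a bag-of-words softmax classifier by batch gradient descent."""
    vocab = build_vocab(dialogues)
    classes = sorted(set(labels))
    X = np.stack([vectorize(d, vocab) for d in dialogues])
    Y = np.eye(len(classes))[[classes.index(l) for l in labels]]
    W = np.zeros((X.shape[1], len(classes)))
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)  # cross-entropy gradient step
    return vocab, classes, W

def predict_topic(text, vocab, classes, W):
    return classes[int(np.argmax(vectorize(text, vocab) @ W))]

vocab, classes, W = train_topic_model(DIALOGUES, LABELS)
```

Once trained, the model is applied per round exactly as the flow above describes: the current turn (plus context) is vectorized and the argmax class becomes the round's topic label.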
Supplementary notes
When the intelligent robot in the present embodiment is a story machine, in addition to the characteristics described above, it may further possess the following features.
(1) As a part of the home Internet of Things, interconnection between the story machine and WeChat can be realized;
(2) It possesses functions such as on-demand playback, favourites, voice-controlled interruption, and sound effects;
(3) It possesses an OCR (Optical Character Recognition) function, realizing the functions of reading picture books aloud and accompanying reading;
(4) It can actively push content according to the preferences of the child user.
Since the method for the present invention describes what is realized in computer systems.The computer system can for example be setIn control core processor.For example, method described herein can be implemented as that software can be performed with control logic, byCPU in operating system is performed.Function as described herein can be implemented as being stored in readable Jie of non-transitory tangible computerProgram instruction set in matter.When implemented in this fashion, which includes one group of instruction, when the group is instructed by countingIt promotes computer to perform the method that can implement above-mentioned function when calculation machine is run.Programmable logic can be installed temporarily or permanentlyIn non-transitory visible computer readable medium, such as ROM chip, computer storage, disk or other storagesMedium.Except with software come in addition to realizing, logic as described herein can utilize discrete parts, integrated circuit and programmable logicEquipment (such as, field programmable gate array (FPGA) or microprocessor) be used in combination programmable logic or including themAny other equipment of any combination embodies.All such embodiments are intended to fall under within the scope of the present invention.
It should be understood that the disclosed embodiments of the present invention are not limited to the processing steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terms used herein are only for the purpose of describing particular embodiments and are not intended to be limiting.
" one embodiment " or " embodiment " mentioned in specification means the special characteristic described in conjunction with the embodiments, structureOr characteristic is included at least one embodiment of the present invention.Therefore, the phrase " reality that specification various places throughout occursApply example " or " embodiment " same embodiment might not be referred both to.
Although the embodiments above are disclosed to facilitate understanding of the present invention, the invention is not limited to these embodiments. Any person skilled in the art to which this invention pertains may, without departing from the spirit and scope disclosed by the present invention, make any modification and change in the form and details of implementation, but the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.