CN105527862B - A kind of information processing method and the first electronic equipment - Google Patents

A kind of information processing method and the first electronic equipment

Info

Publication number
CN105527862B
Authority
CN
China
Prior art keywords
information
electronic equipment
sensing
path
sound wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410509682.9A
Other languages
Chinese (zh)
Other versions
CN105527862A (en)
Inventor
杨碧波
李洪伟
黄绍华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201410509682.9A
Publication of CN105527862A
Application granted
Publication of CN105527862B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses an information processing method and a first electronic device. The method includes: identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path; detecting the sound wave on the first sensing detection and localization path to obtain first information; detecting the sound wave on the second sensing detection and localization path to obtain second information; locating the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information; and parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.

Description

A kind of information processing method and the first electronic equipment
Technical field
The present invention relates to communication technology, and in particular to an information processing method and a first electronic device.
Background technique
While devising the technical solutions of the embodiments of the present application, the inventors found at least the following technical problem in the related art:
Existing speech recognition interaction, because its mechanism is based on recognizing specific voice content, can only serve speech recognition scenarios with a single user and a single device. For example, a controlled device may be a desk lamp with a voice control function: the user issues, through the emission source where the user is located, a voice command for controlling that device such as "turn on the light" or "turn off the light", and when the voice content carried by the command is recognized as "turn on the light" or "turn off the light", the lamp is correspondingly turned on or off. Moreover, even for a single device, nothing beyond the voice content can be used: the location information of the emission source itself, such as its position or orientation, cannot be used to assist recognition, e.g. the emission source emitting toward the left corresponding to "turn on the light" and the emission source emitting toward the right corresponding to "turn off the light".
For speech recognition scenarios with multiple controlled devices, this problem is even harder to solve, and in the related art there is no effective solution to it.
Summary of the invention
In view of this, embodiments of the present invention aim to provide an information processing method and a first electronic device that at least solve the above problem of the prior art.
The technical solutions of the embodiments of the present invention are achieved as follows:
An embodiment of the invention discloses an information processing method applied in a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. The method comprises:
identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
detecting the sound wave on the first sensing detection and localization path to obtain first information;
detecting the sound wave on the second sensing detection and localization path to obtain second information;
locating the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information;
parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Preferably, the first information is a first time taken for the sound wave to be detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second time taken for the sound wave to be detected reaching the second sensing module on the second sensing detection and localization path.
Preferably, locating the emission source where the user is located according to the operation result computed from the first information and the second information, to obtain the third information, comprises:
computing a time difference as the operation result from the first time and the second time;
converting the time difference into an angle value, the angle value characterizing the size of the angle between the first sensing detection and localization path and a calibration path that satisfies a preset condition;
obtaining the calibration path from the line connecting the positions of the first sensing module and the second sensing module together with the angle value;
determining a first position calibrated by at least two calibration paths as the position of the emission source where the user is located.
Preferably, using the third information to assist the voice command in performing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, comprises:
obtaining the first position;
obtaining the position of the at least one second electronic device;
computing a distance difference from the first position and the position of the at least one second electronic device;
selecting, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source where the user is located meets a threshold, and performing corresponding voice control on the selected second electronic device.
Preferably, the first information is a first intensity with which the sound wave is detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second intensity with which the sound wave is detected reaching the second sensing module on the second sensing detection and localization path.
Preferably, locating the emission source where the user is located according to the operation result computed from the first information and the second information, to obtain the third information, comprises:
when the operation result obtained by comparing the first intensity with the second intensity is that the first intensity is greater than the second intensity, determining the first direction corresponding to the first sensing detection and localization path as the direction of the emission source where the user is located.
Preferably, using the third information to assist the voice command in performing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, comprises:
obtaining the first direction;
obtaining the position of the at least one second electronic device;
choosing, from the at least one second electronic device, a second electronic device that the first direction points toward, and performing corresponding voice control on the selected second electronic device.
An embodiment of the invention provides a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. The first electronic device further includes:
a detection path determining unit, configured to identify the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
a first acquisition unit, configured to detect the sound wave on the first sensing detection and localization path to obtain first information;
a second acquisition unit, configured to detect the sound wave on the second sensing detection and localization path to obtain second information;
a positioning unit, configured to locate the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information;
a control processing unit, configured to parse the voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Preferably, the first information is a first time taken for the sound wave to be detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second time taken for the sound wave to be detected reaching the second sensing module on the second sensing detection and localization path.
Preferably, the positioning unit includes:
a first operation subunit, configured to compute a time difference as the operation result from the first time and the second time;
a second operation subunit, configured to convert the time difference into an angle value, the angle value characterizing the size of the angle between the first sensing detection and localization path and a calibration path that satisfies a preset condition;
a third operation subunit, configured to obtain the calibration path from the line connecting the positions of the first sensing module and the second sensing module together with the angle value;
a position locating subunit, configured to determine a first position calibrated by at least two calibration paths as the position of the emission source where the user is located.
Preferably, the control processing unit includes:
a first obtaining subunit, configured to obtain the first position;
a second obtaining subunit, configured to obtain the position of the at least one second electronic device;
a first processing subunit, configured to compute a distance difference from the first position and the position of the at least one second electronic device;
a second processing subunit, configured to select, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source where the user is located meets a threshold, and to perform corresponding voice control on the selected second electronic device.
Preferably, the first information is a first intensity with which the sound wave is detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second intensity with which the sound wave is detected reaching the second sensing module on the second sensing detection and localization path.
Preferably, the positioning unit includes:
a direction locating subunit, configured to, when the operation result obtained by comparing the first intensity with the second intensity is that the first intensity is greater than the second intensity, determine the first direction corresponding to the first sensing detection and localization path as the direction of the emission source where the user is located.
Preferably, the control processing unit includes:
a third obtaining subunit, configured to obtain the first direction;
a fourth obtaining subunit, configured to obtain the position of the at least one second electronic device;
a third processing subunit, configured to choose, from the at least one second electronic device, a second electronic device that the first direction points toward, and to perform corresponding voice control on the selected second electronic device.
In the information processing method of the embodiments of the present invention, the method is applied in a first electronic device; the first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located, and each sensing module group includes two sensing modules. The method comprises: identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path; detecting the sound wave on the first sensing detection and localization path to obtain first information; detecting the sound wave on the second sensing detection and localization path to obtain second information; locating the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information; and parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing. The embodiments of the present invention can thus solve the above problem of the prior art.
Brief description of the drawings
Fig. 1 is a schematic flowchart of method embodiment one of the present invention;
Fig. 2 is a schematic flowchart of method embodiment two of the present invention;
Fig. 3 is a schematic flowchart of method embodiment three of the present invention;
Fig. 4 is a schematic diagram of a sensor array according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of another sensor array according to an embodiment of the present invention;
Fig. 6 is a localization schematic diagram of an application scenario of an embodiment of the present invention;
Fig. 7 is a schematic diagram of the composition of electronic device embodiment one of the present invention.
Specific embodiments
The implementation of the technical solutions is described in further detail below with reference to the accompanying drawings.
Method embodiment one:
An embodiment of the invention provides an information processing method applied in a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. As shown in Fig. 1, the method comprises:
Step 101: identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
Step 102: detecting the sound wave on the first sensing detection and localization path to obtain first information;
Step 103: detecting the sound wave on the second sensing detection and localization path to obtain second information;
Here, depending on the application scenario, the first information and the second information fall into two types, such as time or intensity, which will be described in detail later; for example, position localization or direction localization is performed by comparing a time difference or by comparing intensities. The intensity may include, for example, phase intensity or audio intensity.
Step 104: locating the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information;
Here, the third information may be the position and/or the sounding direction of the emission source where the user is located; the position may be obtained from the above time difference, and the direction may be obtained by comparing intensities.
Step 105: parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes a sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" controlled by the voice command issued from the emission source where the user is located, such as a smart home appliance like a television or a refrigerator.
With this embodiment of the present invention, the first sensing detection and localization path and the second sensing detection and localization path are obtained in step 101; the first information and the second information are obtained in steps 102-103; the third information is obtained from the first information and the second information in step 104, so that the third information corresponding to the user's emission source, such as its position and/or direction, can be located from the sound detected by the sensors; and in step 105 the third information itself, or auxiliary information based on it, assists the voice command in performing the first processing, thereby realizing voice control of the second electronic device.
In a preferred embodiment of the present invention, the first information is a first time taken for the sound wave to be detected reaching the first sensing module on the first sensing detection and localization path; the second information is a second time taken for the sound wave to be detected reaching the second sensing module on the second sensing detection and localization path.
Method embodiment two:
An embodiment of the invention provides an information processing method applied in a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. As shown in Fig. 2, the method comprises:
Step 201: identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
Step 202: detecting the sound wave on the first sensing detection and localization path to obtain a first time;
Step 203: detecting the sound wave on the second sensing detection and localization path to obtain a second time;
Step 204: computing a time difference as the operation result from the first time and the second time;
Step 205: converting the time difference into an angle value, the angle value characterizing the size of the angle between the first sensing detection and localization path and a calibration path that satisfies a preset condition;
Here, the calibration path may be the perpendicular bisector of the line connecting the positions of the two sensors;
Step 206: obtaining the calibration path from the line connecting the positions of the first sensing module and the second sensing module together with the angle value;
Step 207: determining a first position calibrated by at least two calibration paths as the position of the emission source where the user is located;
Here, the first position may be the intersection point of the line segments on which the at least two calibration paths lie.
Step 208: parsing the voice command carried in the sound wave and, when a preset rule is met, using the first position to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes a sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" controlled by the voice command issued from the emission source where the user is located, such as a smart home appliance like a television or a refrigerator.
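To make steps 204-207 concrete, the following is a minimal sketch, not part of the patent text, of how a measured time difference might be converted into a calibration path and how two such paths can be intersected to estimate the first position. It assumes a far-field (plane-wave) approximation and two-dimensional coordinates; the helper names (`bearing_from_tdoa`, `locate_source`) and the numbers in the example are made up.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def bearing_from_tdoa(p1, p2, time_diff):
    """Calibration path for one sensor pair, returned as (point, unit direction).

    time_diff is the arrival time at p1 minus the arrival time at p2.
    Under a far-field assumption, sin(a) = c * dt / d, where a is measured
    from the perpendicular bisector of the segment p1-p2 and d is the spacing."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    baseline = p2 - p1
    d = np.linalg.norm(baseline)
    sin_a = np.clip(SPEED_OF_SOUND * time_diff / d, -1.0, 1.0)  # clamp against noise
    a = np.arcsin(sin_a)
    midpoint = (p1 + p2) / 2.0
    normal = np.array([-baseline[1], baseline[0]]) / d  # perpendicular bisector direction
    # Tilt the bisector toward the sensor that heard the sound earlier.
    direction = np.cos(a) * normal + np.sin(a) * baseline / d
    return midpoint, direction

def locate_source(path_a, path_b):
    """Intersect two calibration paths (each a point plus a direction) in 2D."""
    (o1, d1), (o2, d2) = path_a, path_b
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), o2 - o1)
    return o1 + t * d1

# Example with made-up sensor positions (meters) and time differences (seconds).
path1 = bearing_from_tdoa((0.0, 0.0), (0.5, 0.0), time_diff=0.4e-3)
path2 = bearing_from_tdoa((2.0, 0.0), (2.5, 0.0), time_diff=-0.3e-3)
print("estimated source position:", locate_source(path1, path2))
```

With more than two sensor pairs, the individual bearings can be combined (for example by least squares) rather than intersecting just two lines, which is in line with the later remark that more sensors reduce error and noise.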
In a preferred embodiment of the present invention, using the third information to assist the voice command in performing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, comprises the following (a sketch follows this list):
obtaining the first position;
obtaining the position of the at least one second electronic device;
computing a distance difference from the first position and the position of the at least one second electronic device;
selecting, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source where the user is located meets a threshold, and performing corresponding voice control on the selected second electronic device.
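As an illustration only (not part of the patent), a minimal sketch of this distance-based selection, assuming 2D coordinates for the first position and the controlled devices and an application-chosen threshold:

```python
import math

DISTANCE_THRESHOLD = 2.0  # meters; assumed application-specific threshold

def select_device_by_distance(first_position, devices):
    """Pick the controlled device closest to the user's emission source,
    provided it lies within the threshold. `devices` maps name -> (x, y)."""
    best_name, best_dist = None, float("inf")
    for name, pos in devices.items():
        dist = math.dist(first_position, pos)
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_name is not None and best_dist <= DISTANCE_THRESHOLD:
        return best_name
    return None  # no device meets the threshold

# Example usage with made-up coordinates.
devices = {"tv": (1.0, 4.5), "fridge": (5.0, 0.5)}
print(select_device_by_distance((1.4, 4.0), devices))  # -> "tv"
```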
Method embodiment three:
An embodiment of the invention provides an information processing method applied in a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. As shown in Fig. 3, the method comprises:
Step 301: identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
Step 302: detecting the sound wave on the first sensing detection and localization path to obtain a first intensity;
Step 303: detecting the sound wave on the second sensing detection and localization path to obtain a second intensity;
Step 304: when the operation result obtained by comparing the first intensity with the second intensity is that the first intensity is greater than the second intensity, determining the first direction corresponding to the first sensing detection and localization path as the direction of the emission source where the user is located;
Step 305: parsing the voice command carried in the sound wave and, when a preset rule is met, using the first direction to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes a sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" controlled by the voice command issued from the emission source where the user is located, such as a smart home appliance like a television or a refrigerator.
In a preferred embodiment of the present invention, using the third information to assist the voice command in performing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, comprises the following (a sketch follows this list):
obtaining the first direction;
obtaining the position of the at least one second electronic device;
choosing, from the at least one second electronic device, a second electronic device that the first direction points toward, and performing corresponding voice control on the selected second electronic device.
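As an illustration only (not part of the patent), a minimal sketch of the intensity comparison of step 304 and of this direction-based selection; the helper names, the angular tolerance and the example numbers are assumptions:

```python
import math

def direction_from_intensities(intensity_1, intensity_2, dir_1, dir_2):
    """Step 304: keep the direction of the path on which the sound is stronger."""
    return dir_1 if intensity_1 > intensity_2 else dir_2

def select_device_by_direction(array_position, direction, devices, max_angle_deg=30.0):
    """Choose the device whose bearing from the sensor array best matches
    `direction` (a unit 2D vector), within an assumed angular tolerance."""
    best_name, best_angle = None, math.radians(max_angle_deg)
    for name, pos in devices.items():
        dx, dy = pos[0] - array_position[0], pos[1] - array_position[1]
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            continue
        cos_angle = (dx * direction[0] + dy * direction[1]) / norm
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# Example with made-up numbers: the first path is louder, so its direction wins.
direction = direction_from_intensities(0.8, 0.3, dir_1=(-1.0, 0.0), dir_2=(1.0, 0.0))
devices = {"tv": (-3.0, 0.5), "fridge": (4.0, 0.0)}
print(select_device_by_direction((0.0, 0.0), direction, devices))  # -> "tv"
```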
An embodiment of the present invention is described below using a practical application scenario as an example:
This scenario is a user voice-instruction recognition scheme based on stereophonic sound localization. It relies on a microphone array and mimics the binaural stereo localization of the human ear: from the time differences and intensity differences of the sound received by different sound sensors, it can determine the user's position relative to the smart home devices (the controlled devices controlled by sound instructions) and/or the user's sounding direction. This position and/or sounding direction can assist semantic recognition, and the semantic recognition in turn assists sound-source localization, providing enhanced recognition capability and effect.
For example, when the user issues the same voice toward different directions, it can be judged as an instruction to one of two devices located in different directions, or interpreted as different instructions for the same device. This simplifies the user's voice operations. Specifically, by judging which controlled device the user is close to, or for which device the signal is strong, the voice can be attributed to that controlled device and recognized automatically, so the user no longer has to walk up to a particular device to perform voice input; and by distinguishing different directions for the same device, different voice commands can be defined without requiring additional voice input from the user.
Here, the microphone array is a set of multiple sound sensors mounted on an indoor wall. They may be installed horizontally at fixed intervals in a line, as shown in Fig. 4; they may be arranged along intersecting horizontal and vertical directions, as shown in Fig. 5; or they may be arranged at fixed intervals over a rectangular plane.
Here, judging the above sounding position and sounding direction of the user requires several factors: a three-dimensional view of the room; the coordinates of each sound sensor in that view; the coordinates of each controlled device in that view; and the time differences or phase differences with which the instruction sound issued by the user reaches the individual sensors.
When the sensor positions are fixed and known, the time difference between the sound received by any two sensors, as shown in Fig. 6, allows the angular relationship between the sound source and that sensor pair to be judged relative to the perpendicular bisector of the line connecting the pair (shown in Fig. 6 as a dash-dotted line); for example, the pair formed by sensor 1 and sensor 2 corresponds to angle a, and another pair may be formed by sensor 2 and sensor 3. Combining the angular relationships between the sound source and two or more sensor pairs, i.e. two or more straight lines (the dash-dotted lines in Fig. 6), the spatial position of the sound source can be obtained (in Fig. 6, the intersection of the two dash-dotted lines marks the position of the sound source).
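Under a far-field (plane-wave) approximation, which the patent text does not state explicitly and is assumed here only for illustration, the measured time difference maps to the angle a roughly as

sin a = (c · Δt) / d,  i.e.  a = arcsin((c · Δt) / d),  defined only when |c · Δt| ≤ d,

where c is the speed of sound, d the spacing between the two sensors of the pair, and Δt the difference between their arrival times.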
As the number and density of sensors increase, errors and noise can be further eliminated, and the location information becomes more accurate.
Here, as to judging the sounding direction: because of the sound-shielding effect of the human skull and facial muscles, the sound produced by vocal-cord vibration has different intensities in different directions, i.e. sound is directive. From the differences in sound intensity received by the multiple sensors placed on the four walls of the room, the direction of the sound source can be determined.
In addition, as for the principle of stereophonic localization, positioning can also be realized by techniques such as the binaural effect and the interaural time and intensity differences, which are not repeated here.
It should be noted that the following description of the electronic device is similar to the above description of the method and has the same beneficial effects, so these are not repeated here. For technical details not disclosed in the electronic device embodiments of the present invention, please refer to the description of the method embodiments of the present invention.
Electronic device embodiment one:
An embodiment of the invention provides a first electronic device. The first electronic device includes at least two sets of sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source where the user is located; each sensing module group includes two sensing modules. As shown in Fig. 7, the first electronic device further includes:
a detection path determining unit, configured to identify the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source where the user is located as a first sensing detection and localization path and a second sensing detection and localization path;
a first acquisition unit, configured to detect the sound wave on the first sensing detection and localization path to obtain first information;
a second acquisition unit, configured to detect the sound wave on the second sensing detection and localization path to obtain second information;
a positioning unit, configured to locate the emission source where the user is located according to an operation result computed from the first information and the second information, to obtain third information;
a control processing unit, configured to parse the voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in performing first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
In a preferred embodiment of the present invention, the first information is a first time taken for the sound wave to be detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second time taken for the sound wave to be detected reaching the second sensing module on the second sensing detection and localization path.
In a preferred embodiment of the present invention, the positioning unit includes:
a first operation subunit, configured to compute a time difference as the operation result from the first time and the second time;
a second operation subunit, configured to convert the time difference into an angle value, the angle value characterizing the size of the angle between the first sensing detection and localization path and a calibration path that satisfies a preset condition;
a third operation subunit, configured to obtain the calibration path from the line connecting the positions of the first sensing module and the second sensing module together with the angle value;
a position locating subunit, configured to determine a first position calibrated by at least two calibration paths as the position of the emission source where the user is located.
In a preferred embodiment of the present invention, the control processing unit includes:
a first obtaining subunit, configured to obtain the first position;
a second obtaining subunit, configured to obtain the position of the at least one second electronic device;
a first processing subunit, configured to compute a distance difference from the first position and the position of the at least one second electronic device;
a second processing subunit, configured to select, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source where the user is located meets a threshold, and to perform corresponding voice control on the selected second electronic device.
In a preferred embodiment of the present invention, the first information is a first intensity with which the sound wave is detected reaching the first sensing module on the first sensing detection and localization path;
the second information is a second intensity with which the sound wave is detected reaching the second sensing module on the second sensing detection and localization path.
In a preferred embodiment of the present invention, the positioning unit includes:
a direction locating subunit, configured to, when the operation result obtained by comparing the first intensity with the second intensity is that the first intensity is greater than the second intensity, determine the first direction corresponding to the first sensing detection and localization path as the direction of the emission source where the user is located.
In a preferred embodiment of the present invention, the control processing unit includes:
a third obtaining subunit, configured to obtain the first direction;
a fourth obtaining subunit, configured to obtain the position of the at least one second electronic device;
a third processing subunit, configured to choose, from the at least one second electronic device, a second electronic device that the first direction points toward, and to perform corresponding voice control on the selected second electronic device.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may individually serve as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

CN201410509682.9A | 2014-09-28 | 2014-09-28 | A kind of information processing method and the first electronic equipment | Active | CN105527862B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410509682.9A (CN105527862B) | 2014-09-28 | 2014-09-28 | A kind of information processing method and the first electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201410509682.9A (CN105527862B) | 2014-09-28 | 2014-09-28 | A kind of information processing method and the first electronic equipment

Publications (2)

Publication Number | Publication Date
CN105527862A (en) | 2016-04-27
CN105527862B | 2019-01-15

Family

ID=55770157

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201410509682.9A (Active; CN105527862B) | A kind of information processing method and the first electronic equipment | 2014-09-28 | 2014-09-28

Country Status (1)

Country | Link
CN (1) | CN105527862B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107728482A (en)* | 2016-08-11 | 2018-02-23 | 阿里巴巴集团控股有限公司 | Control system, control process method and device
KR102338376B1 (en)* | 2017-09-13 | 2021-12-13 | 삼성전자주식회사 | An electronic device and Method for controlling the electronic device thereof
CN110112801B (en)* | 2019-04-29 | 2023-05-02 | 西安易朴通讯技术有限公司 | Charging method and charging system
CN110299865B (en)* | 2019-06-20 | 2021-05-11 | Oppo广东移动通信有限公司 | Electronic device, control method for electronic device, and storage medium
CN112584014A (en)* | 2020-12-01 | 2021-03-30 | 苏州触达信息技术有限公司 | Intelligent camera, control method thereof and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5901232A (en)* | 1996-09-03 | 1999-05-04 | Gibbs; John Ho | Sound system that determines the position of an external sound source and points a directional microphone/speaker towards it
CN101510425A (en)* | 2008-02-15 | 2009-08-19 | 株式会社东芝 | Voice recognition apparatus and method for performing voice recognition
CN103529726A (en)* | 2013-09-16 | 2014-01-22 | 四川虹微技术有限公司 | Intelligent switch with voice recognition function
CN103871229A (en)* | 2014-03-26 | 2014-06-18 | 珠海迈科电子科技有限公司 | Remote controller adopting acoustic locating and control method of remote controller

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN100419454C (en)* | 2005-01-19 | 2008-09-17 | 北京北阳电子技术有限公司 | Sound source positioning apparatus and method, electronic apparatus employing the same


Also Published As

Publication number | Publication date
CN105527862A (en) | 2016-04-27

Similar Documents

Publication | Publication Date | Title
CN105527862B (en) | A kind of information processing method and the first electronic equipment
CN112088315B (en) | Multi-mode speech localization
JP7745603B2 (en) | Wearable System Speech Processing
US10075791B2 (en) | Networked speaker system with LED-based wireless communication and room mapping
KR101576148B1 (en) | System and method for the multidimensional evaluation of gestures
US9854362B1 (en) | Networked speaker system with LED-based wireless communication and object detection
US9847082B2 (en) | System for modifying speech recognition and beamforming using a depth image
EP2737727B1 (en) | Method and apparatus for processing audio signals
JP2007221300A (en) | Robot and robot control method
Murray et al. | Robotic sound-source localisation architecture using cross-correlation and recurrent neural networks
KR20220117282A (en) | Audio device auto-location
CN106465012B (en) | System and method for locating sound and providing real-time world coordinates using communication
CN107533134A (en) | Method and system for at least detecting the position of an object in space
CN109314834A (en) | Improve the perception of sound objects that mediate reality
CN113196390A (en) | Perception system based on hearing and use method thereof
EP4287595A1 (en) | Sound recording method and related device
US20170134853A1 (en) | Compact sound location microphone
CN113491575A (en) | Surgical system control based on voice commands
JP2004198656A (en) | Robot audiovisual system
US9924286B1 (en) | Networked speaker system with LED-based wireless communication and personal identifier
US20250259639A1 (en) | Audio source separation using multi-modal audio source channalization system
TW202324372A (en) | Audio system with dynamic target listening spot and ambient object interference cancelation
CN110164443A (en) | Method of speech processing, device and electronic equipment for electronic equipment
WO2013091677A1 (en) | Speech recognition method and system
Nguyen et al. | Selection of the closest sound source for robot auditory attention in multi-source scenarios

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
