CN114211486A - Robot control method, robot and storage medium - Google Patents

Robot control method, robot and storage medium

Info

Publication number
CN114211486A
CN114211486A, CN202111518577.8A, CN202111518577A
Authority
CN
China
Prior art keywords
robot
information
sensor
collected
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111518577.8A
Other languages
Chinese (zh)
Other versions
CN114211486B (en)
Inventor
高向阳
程俊
张锲石
康宇航
任子良
郭海光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111518577.8A
Publication of CN114211486A
Application granted
Publication of CN114211486B
Legal status: Active
Anticipated expiration


Abstract

The application is applicable to the technical field of robots, and provides a control method of a robot, a robot and a storage medium. The method comprises the following steps: acquiring first data collected by a sensor module in the robot, wherein the first data comprises human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located; obtaining a control strategy of the robot based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy; and controlling the robot to act based on the control strategy. Because the control strategy is obtained from the human-computer interaction information, the performance information of the robot and the environment information of the environment where the robot is located together, rather than from a single type of information, the degree of intelligence of the robot is improved.

Description

Robot control method, robot and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a method for controlling a robot, a robot, and a storage medium.
Background
With the development of science and technology, robots are used more and more widely, for example, sweeping robots, meal delivery robots, and guide robots.
At present, most robots determine a control strategy according to a single kind of collected information, for example, determining an interaction strategy according to interaction information from a user, or determining a motion strategy according to detected external environment information. A robot that determines its control strategy from such a single type of information has a low degree of intelligence.
Disclosure of Invention
The embodiment of the application provides a control method of a robot, the robot and a storage medium, and can solve the problem of low intelligent degree of the robot.
In a first aspect, an embodiment of the present application provides a control method for a robot, including:
acquiring first data acquired by a sensor module in the robot, wherein the first data comprises human-computer interaction information, performance information of the robot and environmental information of an environment where the robot is located;
obtaining a control strategy of the robot based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
controlling the robot action based on the control strategy.
In a second aspect, an embodiment of the present application provides a robot, including a sensor module and an information processing module, where the information processing module includes:
the data acquisition module is used for acquiring first data acquired by a sensor module in the robot, wherein the first data comprises human-computer interaction information, performance information of the robot and environmental information of the environment where the robot is located;
the strategy determining module is used for obtaining a control strategy of the robot based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
and the control module is used for controlling the robot action based on the control strategy.
In a third aspect, an embodiment of the present application provides a robot, including: a sensor module, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the control method of the robot according to any of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the control method of the robot according to any one of the above first aspects.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the control method for a robot according to any one of the above first aspects.
Compared with the prior art, the embodiment of the first aspect of the application has the following beneficial effects: first data acquired by a sensor module in the robot is acquired, wherein the first data comprises human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located; a control strategy of the robot is obtained based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy; and the robot is controlled to act based on the control strategy. Because at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy is obtained from the human-computer interaction information, the performance information of the robot and the environment information together, rather than from a single type of information, the degree of intelligence of the robot is improved.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic diagram of an application scenario of a control method of a robot according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a control method of a robot according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a method for obtaining a control strategy of a robot according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a method for determining to acquire first data according to voice information according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for determining to acquire first data according to touch information according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for determining to acquire first data according to video information according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an information processing module according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification of this application and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The robot in this application integrates multiple types of sensors, so that it can collect different types of information and control the robot based on those types of information taken together. This solves the problem that a robot controlled according to only one type of information has a low degree of intelligence.
As shown in fig. 1, an application scenario of a control method of a robot according to an embodiment of the present application is provided, and the control method may be used to control the robot. The robot is provided with a plurality of sensors, including: a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor. The microphone array is used for collecting voice information of a user. The light sensing sensor is used for collecting light sensing information. The radio frequency sensor is used for collecting radio frequency information. The camera is used for collecting video information. The touch sensor is used for collecting touch information of a user. The limit sensor is used for acquiring action position information of the robot. The current sensor is used for collecting a current signal of a power supply in the robot. The acceleration sensor is used for acquiring acceleration information of the robot. The temperature sensor is used for acquiring temperature information of the environment where the robot is located. The infrared sensor is used for collecting first obstacle information of the environment where the robot is located. The ultrasonic sensor is used for acquiring second obstacle information of the environment where the robot is located. The robot performs feature extraction and feature fusion on the information acquired by the sensors to obtain a control strategy of the robot, and carries out the corresponding human-computer interaction, obstacle avoidance and movement according to the control strategy.
Specifically, feature extraction and feature fusion are performed on first voice information acquired by a microphone array, light induction information acquired by a light induction sensor, radio frequency information acquired by a radio frequency sensor, first video information acquired by a camera, and first touch information acquired by a touch sensor.
Feature extraction and feature fusion are likewise performed on the action position information collected by the limit sensor, the current signal collected by the current sensor, the acceleration information collected by the acceleration sensor, the temperature information collected by the temperature sensor, the first obstacle information collected by the infrared sensor, and the second obstacle information collected by the ultrasonic sensor.
Fig. 2 shows a schematic flow chart of a control method of the robot provided by the present application, and a processor in the robot can be used to implement the method described below. Referring to fig. 2, the method is detailed as follows:
s101, first data collected by a sensor module in the robot are obtained.
In this embodiment, the first data includes human-computer interaction information, performance information of the robot, and environment information of an environment in which the robot is located.
The human-computer interaction information comprises instruction information sent by a user. For example, the human-computer interaction information may include a voice instruction issued by the user and an action instruction issued by the user. The performance information of the robot may include the amount of power of a battery in the robot, the current of the battery, the angle of rotation of the robot head, the angle of motion of the robot hand, and the like. The environment information may include obstacles around the robot, and the temperature, humidity and the like of the environment.
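For illustration only, the first data can be pictured as a small container holding these three categories of information. The sketch below is a non-authoritative Python rendering; all field names and types are assumptions introduced for this example and are not defined by the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InteractionInfo:
    """Human-computer interaction information (field names are assumed)."""
    voice: Optional[bytes] = None         # first voice information from the microphone array
    light_level: Optional[float] = None   # light sensing information (a level signal)
    rfid_payload: Optional[bytes] = None  # radio frequency information
    video_frame: Optional[bytes] = None   # first video information from the camera
    touched: bool = False                 # first touch information

@dataclass
class PerformanceInfo:
    """Performance information of the robot (field names are assumed)."""
    joint_positions_deg: List[float] = field(default_factory=list)  # limit-sensor action positions
    supply_current_a: float = 0.0                                   # current signal of the power supply
    acceleration_mps2: List[float] = field(default_factory=list)    # acceleration information

@dataclass
class EnvironmentInfo:
    """Environment information of the environment where the robot is located (assumed)."""
    temperature_c: float = 0.0              # temperature information
    ir_obstacle_m: Optional[float] = None   # first obstacle information (infrared sensor)
    us_obstacle_m: Optional[float] = None   # second obstacle information (ultrasonic sensor)

@dataclass
class FirstData:
    """The first data collected by the sensor module."""
    interaction: InteractionInfo
    performance: PerformanceInfo
    environment: EnvironmentInfo
```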
In the embodiment, the sensor module in the robot may include a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, and the like for collecting human-computer interaction information.
The sensor module in the robot may include a limit sensor, a current sensor, an acceleration sensor, etc. for collecting performance information of the robot.
The sensor module in the robot may include a temperature sensor, an infrared sensor, an ultrasonic sensor, and the like for collecting environmental information. The infrared sensor and the ultrasonic sensor are used for acquiring information of the obstacles.
And S102, obtaining the robot control strategy based on the first data.
In this embodiment, the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy, and a motion strategy.
Specifically, the first data is input into a neural network model to obtain a control strategy of the robot. Before the control strategy of the robot is obtained by using the neural network model, a process of training the neural network model can be further included.
Specifically, the process of training the neural network model includes: training parameters are obtained, and the training parameters can comprise human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located. And inputting the training parameters into the neural network model to be trained to obtain training result data. And comparing the training result data with preset result data to obtain difference data, and adjusting parameters in the neural network model according to the difference data. And training the neural network model after the parameters are adjusted by using the training parameters until training result data output by the neural network model meet preset requirements, and obtaining the trained neural network model.
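As a minimal sketch of the training procedure described above, assuming the neural network model is implemented in PyTorch. The architecture, the three-head encoding of the control strategy, and the cross-entropy loss are illustrative assumptions, not the application's specification.

```python
import torch
from torch import nn

# Assumed: the fused first data is flattened into a feature vector, and the
# control strategy is encoded by three heads (interaction, obstacle avoidance,
# motion), each treated here as a small classification problem.
class StrategyNet(nn.Module):
    def __init__(self, in_dim: int = 64, n_interaction: int = 8,
                 n_avoidance: int = 4, n_motion: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, 128), nn.ReLU())
        self.heads = nn.ModuleDict({
            "interaction": nn.Linear(128, n_interaction),
            "obstacle_avoidance": nn.Linear(128, n_avoidance),
            "motion": nn.Linear(128, n_motion),
        })

    def forward(self, x):
        h = self.backbone(x)
        return {name: head(h) for name, head in self.heads.items()}

def train(model: StrategyNet, loader, epochs: int = 10, lr: float = 1e-3):
    """Adjust parameters from the difference between training results and preset results."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, targets in loader:  # targets: dict of preset result labels per head
            opt.zero_grad()
            outputs = model(features)
            loss = sum(loss_fn(outputs[k], targets[k]) for k in outputs)
            loss.backward()
            opt.step()
    return model
```

In practice the loop would stop once the training result data meet the preset requirement; the fixed epoch count here is a simplification.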
For example, if the human-computer interaction information is an instruction from the user to rotate the head 10 degrees to the left, and the performance information of the robot is that the head of the robot has already rotated 30 degrees to the left, the robot may determine the interaction strategy as "the robot plays: the head has been turned 10 degrees to the left", the motion strategy as "turn the head 10 degrees to the left", and the obstacle avoidance strategy as "no obstacle avoidance".
If the human-computer interaction information is a straight-walking instruction sent by the user, the environment information detected by the robot is that an obstacle exists 2 meters ahead, and the performance information of the robot is that the acceleration of the robot is B, then the motion strategy of the robot is "do not go straight", the obstacle avoidance strategy of the robot is "avoid the obstacle 2 meters ahead", and the interaction strategy of the robot is "the robot plays: there is an obstacle ahead, the robot cannot go straight and has re-planned the route".
And S103, controlling the robot to act based on the control strategy.
In this embodiment, the robot may be controlled to output corresponding response information, control the robot to move, and avoid an obstacle according to the control policy.
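Purely to illustrate step S103, the resulting control strategy could be dispatched to the robot's subsystems roughly as follows; the subsystem names and methods (speaker.speak, planner.avoid, actuators.execute) are hypothetical and stand in for whatever interfaces a concrete robot exposes.

```python
def apply_control_strategy(robot, strategy: dict) -> None:
    """Dispatch each part of the control strategy to a robot subsystem (interfaces are assumed)."""
    interaction = strategy.get("interaction")
    if interaction:
        robot.speaker.speak(interaction)       # e.g. "the head has been turned 10 degrees to the left"
    avoidance = strategy.get("obstacle_avoidance")
    if avoidance and avoidance != "no obstacle avoidance":
        robot.planner.avoid(avoidance)         # e.g. avoid the obstacle 2 meters ahead
    motion = strategy.get("motion")
    if motion:
        robot.actuators.execute(motion)        # e.g. turn the head 10 degrees to the left
```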
In the embodiment of the application, first data acquired by a sensor module in the robot is acquired, wherein the first data comprises human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located; a control strategy of the robot is obtained based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy; and the robot is controlled to act based on the control strategy. Because the control strategy is obtained from the human-computer interaction information, the performance information and the environment information together rather than from a single type of information, the degree of intelligence of the robot is improved.
In a possible implementation manner, the human-computer interaction information includes first voice information collected by the microphone array, light-sensing information collected by the light-sensing sensor, radio-frequency information collected by the radio-frequency sensor, first video information collected by the camera, and first touch information collected by the touch sensor.
Specifically, the implementation process of step S101 may include:
s1011, acquiring first voice information acquired by the microphone array.
In this embodiment, the microphone array may collect voice information of a user, and perform denoising processing on the voice information to obtain first voice information.
By way of example, the first voice information may include instructions issued by the user such as "sing", "raise your head" or "dance".
And S1012, acquiring light induction information collected by the light induction sensor, wherein the light induction information is used for determining an action instruction of a user.
In the present embodiment, the light sensing sensor generates light sensing information through light variation, and the light sensing information may be a level signal. The motion performed by the user, such as a hand waving motion of the user, can be identified through the light sensing information.
And S1013, acquiring the radio frequency information acquired by the radio frequency sensor.
In this embodiment, the radio frequency sensor is mainly used for acquiring information on a radio frequency chip on an object having the radio frequency chip, and the information acquired by the radio frequency sensor is recorded as radio frequency information in the present application. For example, the radio frequency sensor may obtain the contents of the book by scanning a radio frequency chip in the book, and after the robot acquires the contents of the book, the contents of the book may be displayed or played through an audio/video module.
And S1014, acquiring first video information acquired by the camera.
In this embodiment, the camera is mainly used for acquiring video information around the robot, and whether people, books, obstacles or the like exist around the robot can be determined through the video information.
And S1015, acquiring first touch information acquired by the touch sensor.
In this embodiment, the touch sensor is mainly used for collecting the user's touch on the robot, and the robot can make corresponding actions according to the touch information; for example, if the user touches the head of the robot, the robot can make a shy expression.
In a possible implementation manner, the performance information includes motion position information of a component on the robot, which is associated with the limit sensor, acquired by the limit sensor, a current signal of a power supply in the robot, acquired by the current sensor, and acceleration information of the robot, acquired by the acceleration sensor.
Specifically, the implementation process of step S101 may include:
and acquiring action position information of a part on the robot, which is related to the limit sensor and acquired by the limit sensor.
And acquiring a current signal of the power supply in the robot, which is acquired by the current sensor.
And acquiring the acceleration information of the robot acquired by the acceleration sensor.
In this embodiment, the limit sensors may include a first limit sensor disposed at the robot head for acquiring the motion amplitude of the robot head, a second limit sensor disposed at the robot arm for acquiring the motion amplitude of the robot arm, a third limit sensor disposed at the robot leg for acquiring the motion amplitude of the robot leg, and the like.
As an example, the first limit sensor disposed at the robot head may capture the angle by which the robot head moves leftward, rightward, upward, or downward.
In this embodiment, the current signal of the power supply in the robot can reflect whether a motor in the robot is operating normally; if the current of the power supply is greater than a preset current, it indicates that the motor current is too large, the motor is damaged to some extent, and the motor is in an abnormal operation state. A corresponding motion strategy and/or interaction strategy may be determined from the current signal.
In this embodiment, the acceleration information may reflect a current time movement situation of the robot, and further, the movement strategy of the robot may be determined again according to the acceleration, the surrounding environment information, and the like.
In a possible implementation manner, the environment information includes temperature information of an environment where the robot is located, which is acquired by the temperature sensor, first obstacle information of the environment where the robot is located, which is acquired by the infrared sensor, and second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
Specifically, the implementation process of step S101 may include:
and acquiring the temperature information of the environment where the robot is located, which is acquired by the temperature sensor.
And acquiring first barrier information of the environment where the robot is located, which is acquired by the infrared sensor.
And acquiring second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
In this embodiment, the temperature information collected by the temperature sensor can be used to generate interaction information when the user wants to know the current temperature, so as to meet the user's requirement.
In this embodiment, both the infrared sensor and the ultrasonic sensor can detect whether an obstacle exists around the robot, so that the robot can avoid the obstacle, interact with the user, and/or generate a motion strategy.
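As a small illustration of how the first and second obstacle information might feed an obstacle avoidance decision, assuming both sensors report a distance in meters; the "keep the nearer reading" merge rule and the 0.5 m threshold are assumptions made for this sketch, not requirements of the method.

```python
from typing import Optional

def nearest_obstacle_m(ir_distance_m: Optional[float],
                       us_distance_m: Optional[float]) -> Optional[float]:
    """Merge infrared and ultrasonic readings by keeping the nearer reported obstacle."""
    readings = [d for d in (ir_distance_m, us_distance_m) if d is not None]
    return min(readings) if readings else None

def needs_avoidance(ir_distance_m: Optional[float],
                    us_distance_m: Optional[float],
                    safety_margin_m: float = 0.5) -> bool:
    """Decide whether an obstacle avoidance strategy is needed (threshold is assumed)."""
    d = nearest_obstacle_m(ir_distance_m, us_distance_m)
    return d is not None and d <= safety_margin_m
```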
As shown in fig. 3, in a possible implementation manner, the implementation process of step S102 may include:
and S1021, performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data.
Specifically, the human-computer interaction information is input into a feature extraction model, and feature extraction is performed on the human-computer interaction information to obtain feature data. After the feature data is obtained, the feature data may be input into the first feature fusion model to perform data combination and data fusion, so as to obtain first fusion data.
And S1022, performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data.
Specifically, the performance information is input into a feature extraction model, and feature extraction is performed on the performance information to obtain first feature data. And inputting the environment information into the feature extraction model, and performing feature extraction on the environment information to obtain second feature data. After the first feature data and the second feature data are obtained, the first feature data and the second feature data may be input into the second feature fusion model for data combination and data fusion, so as to obtain second fusion data.
Alternatively, the performance information and the environmental information may be input together into the feature extraction model to perform feature extraction, so as to obtain third feature data, and the third feature data may be input into the feature fusion model to obtain the second fusion data.
And S1023, obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In this embodiment, the first fusion data and the second fusion data are input into the trained neural network model to obtain the control strategy of the robot.
In the embodiment of the application, the information collected by the plurality of sensors is fused to obtain a control strategy, so that the intelligent degree of the robot can be improved.
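To make steps S1021 to S1023 concrete, here is a schematic two-branch fusion sketch in the same assumed PyTorch setting as the training sketch above; the per-sensor input sizes, layer widths, and the use of simple linear extractors are arbitrary illustration values rather than the application's design.

```python
import torch
from torch import nn

class FusionBranch(nn.Module):
    """Feature extraction followed by feature fusion for one group of sensor inputs."""
    def __init__(self, in_dims, out_dim: int = 32):
        super().__init__()
        # one small extractor per sensor signal in the group
        self.extractors = nn.ModuleList(nn.Linear(d, 16) for d in in_dims)
        # fusion model combines the concatenated per-sensor features
        self.fusion = nn.Linear(16 * len(in_dims), out_dim)

    def forward(self, signals):
        feats = [torch.relu(ext(x)) for ext, x in zip(self.extractors, signals)]
        return torch.relu(self.fusion(torch.cat(feats, dim=-1)))

# Branch 1: human-computer interaction information -> first fusion data
interaction_branch = FusionBranch(in_dims=[20, 1, 8, 30, 1])
# Branch 2: performance + environment information -> second fusion data
state_branch = FusionBranch(in_dims=[6, 1, 3, 1, 1, 1])

def control_strategy(strategy_net, interaction_signals, state_signals):
    """S1021-S1023: fuse each group, then feed both fusion results to the trained model."""
    first_fusion = interaction_branch(interaction_signals)
    second_fusion = state_branch(state_signals)
    return strategy_net(torch.cat([first_fusion, second_fusion], dim=-1))
```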
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information to obtain third fusion data;
performing feature extraction and feature fusion on the environmental information to obtain fourth fusion data;
and obtaining a control strategy of the robot based on the first fusion data, the third fusion data and the fourth fusion data.
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information and the performance information to obtain fifth fusion data;
performing feature extraction and feature fusion on the environmental information to obtain sixth fusion data;
and obtaining a control strategy of the robot based on the fifth fusion data and the sixth fusion data.
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information and the environment information to obtain seventh fusion data;
performing feature extraction and feature fusion on the performance information to obtain eighth fusion data;
and obtaining a control strategy of the robot based on the seventh fusion data and the eighth fusion data.
As shown in fig. 4, in a possible implementation manner, step S101 may further include:
s201, second voice information collected by the microphone array is obtained.
In this embodiment, after the robot is turned on, the microphone array collects voice information sent by the user in real time.
S202, determining whether the second voice message is matched with the awakening instruction or not based on a preset awakening instruction.
In this embodiment, after the second voice information is acquired, denoising processing may be performed on the voice information, so as to extract key information of the voice information. And matching the key information of the second voice information with a preset awakening instruction, and determining whether the robot needs to be switched from the dormant state to the working state. The preset wake-up instruction can be set as required.
S203, if the second voice information is matched with the awakening instruction, acquiring first data acquired by the sensor module.
In this embodiment, if the second voice message matches the wake-up command, the robot is switched from the sleep state to the working state. The robot can acquire first data acquired by the sensor module in a working state, and determines a control strategy according to the first data.
For example, if the key information of the second voice information is "small Q and small Q" and the preset wake-up instruction is "small Q and small Q", it is determined that the second voice information matches the preset wake-up instruction.
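A toy sketch of the wake-up flow in steps S201 to S203; the key-information extraction and the exact-match rule stand in for the real speech processing, and the robot interfaces used here are hypothetical.

```python
PRESET_WAKE_WORD = "小Q小Q"   # preset wake-up instruction ("small Q, small Q"), set as required

def extract_key_info(speech_text: str) -> str:
    """Stand-in for denoising and key-information extraction of the second voice information."""
    return "".join(ch for ch in speech_text if ch not in " ,，。.!！")

def matches_wake_instruction(second_voice_text: str) -> bool:
    """S202: match the key information against the preset wake-up instruction."""
    return extract_key_info(second_voice_text) == PRESET_WAKE_WORD

def handle_voice(robot, second_voice_text: str):
    """S203: on a match, switch from the sleep state to the working state and collect first data."""
    if matches_wake_instruction(second_voice_text):
        robot.set_state("working")                 # hypothetical interface
        return robot.sensors.collect_first_data()  # hypothetical interface
    return None
```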
As shown in fig. 5, in a possible implementation manner, step S101 may further include:
s301, second touch information collected by the touch sensor is obtained, and a first duration of the second touch information is determined.
In this embodiment, after the robot is turned on, the touch sensor collects touch information of the user in real time.
S302, if the first duration is longer than a first preset time, first data collected by the sensor module are obtained.
In this embodiment, after the second touch information is acquired, the touch duration of the user may be calculated, and in this application, the touch duration is recorded as the first duration. When the touch duration is longer than the first preset time, the robot can be switched from the dormant state to the working state. The first preset time may be set as needed, for example, 4 seconds, 5 seconds, or 6 seconds, etc.
In this embodiment, if the first duration is less than or equal to the first preset time, the first data collected by the sensor module is not acquired, and the touch sensor in the robot continues to detect touch information. This avoids starting the robot because a user touched it by mistake.
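A minimal sketch of the touch-duration trigger in steps S301 and S302; the polling style, the 5-second default, and the robot interface are assumptions for illustration only.

```python
import time

FIRST_PRESET_TIME_S = 5.0   # first preset time, e.g. 4, 5 or 6 seconds, set as needed

def wait_for_touch_trigger(robot) -> None:
    """Block until the second touch information has lasted longer than the first preset time."""
    start = None
    while True:
        if robot.touch_sensor.is_touched():        # hypothetical interface
            if start is None:
                start = time.monotonic()
            if time.monotonic() - start > FIRST_PRESET_TIME_S:
                return                             # switch to working state, then collect first data
        else:
            start = None                           # a shorter touch is treated as accidental
        time.sleep(0.05)
```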
As shown in fig. 6, in a possible implementation manner, step S101 may further include:
s401, second video information collected by the camera is obtained, and whether face information exists in the second video information is determined.
In this embodiment, after the robot is turned on, the camera collects video information around the robot in real time; this application refers to it as the second video information. After the processor acquires the second video information, it analyzes the second video information to determine whether face information exists in it, namely whether a user is near the robot.
Specifically, the processor may input the second video information into the detection model to determine whether the second video information has face information.
S402, if the face information exists in the second video information, determining a second duration time of the face information existing in the second video information.
In this embodiment, if face information exists in the second video information, the duration of the face information may be calculated so as to exclude the situation where the user merely passes by and does not want to interact with the robot.
And S403, if the second duration is longer than a second preset time, acquiring first data acquired by the sensor module.
In this embodiment, if the duration of the face information is greater than the second preset time, it may be determined that the user wants to interact with the robot, and the robot may be switched from a sleep state to an operating state. The second preset time may be set as needed.
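A rough sketch of steps S401 to S403, assuming a face-detection function is available; the detector, the polling rate, the 3-second default, and the camera interface are illustrative assumptions.

```python
import time

SECOND_PRESET_TIME_S = 3.0   # second preset time (value is assumed, set as needed)

def wait_for_face_trigger(robot, detect_face) -> None:
    """Block until face information has persisted in the video for longer than the preset time."""
    first_seen = None
    while True:
        frame = robot.camera.read_frame()          # hypothetical interface: second video information
        if detect_face(frame):                     # e.g. a trained detection model
            if first_seen is None:
                first_seen = time.monotonic()
            if time.monotonic() - first_seen > SECOND_PRESET_TIME_S:
                return                             # the user wants to interact: collect first data
        else:
            first_seen = None                      # the user merely passed by
        time.sleep(0.1)
```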
In the embodiment of the application, trigger conditions for acquiring the first data are set, and the first data is acquired only when a trigger condition is met. This prevents misoperation of the robot, reduces the amount of data processed in the robot, and prolongs the service life of the robot.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the control method of the robot described in the above embodiments, the present application provides a robot including a sensor module and an information processing module.
Referring to fig. 7, the information processing module 500 may include: a data acquisition module 510, a strategy determination module 520, and a control module 530.
The data acquisition module 510 is configured to acquire first data acquired by a sensor module in the robot, where the first data includes human-computer interaction information, performance information of the robot, and environment information of the environment where the robot is located;
the strategy determination module 520 is configured to obtain a control strategy of the robot based on the first data, where the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy, and a motion strategy;
and the control module 530 is configured to control the robot action based on the control strategy.
In one possible implementation, the sensor module includes a microphone array, a light sensing sensor, a radio frequency sensor, a camera, and a touch sensor;
the man-machine interaction information comprises first voice information collected by the microphone array, light induction information collected by the light induction sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera and first touch information collected by the touch sensor;
the light sensing information is used for determining action instructions of a user.
In one possible implementation, the sensor module includes: the device comprises a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the performance information comprises action position information of a part on the robot, which is related to the limit sensor, acquired by the limit sensor, current signals of a power supply in the robot, acquired by the current sensor, and acceleration information of the robot, acquired by the acceleration sensor;
the environment information comprises temperature information of the environment where the robot is located, acquired by the temperature sensor, first obstacle information of the environment where the robot is located, acquired by the infrared sensor, and second obstacle information of the environment where the robot is located, acquired by the ultrasonic sensor.
In a possible implementation manner, the strategy determination module 520 may specifically be configured to:
performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data;
and obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second voice information acquired by the microphone array;
determining whether the second voice information is matched with a preset awakening instruction or not based on the preset awakening instruction;
and if the second voice information is matched with the awakening instruction, acquiring first data acquired by the sensor module.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second touch information acquired by the touch sensor, and determining first duration of the second touch information;
and if the first duration is longer than a first preset time, acquiring first data acquired by the sensor module.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second video information acquired by the camera, and determining whether face information exists in the second video information;
if the face information exists in the second video information, determining a second duration of the face information existing in the second video information;
and if the second duration is longer than a second preset time, acquiring first data acquired by the sensor module.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a robot, and referring to fig. 8, the robot 700 may include: a sensor module, at least one processor 710, a memory 720, and a computer program stored in the memory 720 and executable on the at least one processor 710, wherein the processor 710, when executing the computer program, implements the steps of any of the method embodiments described above, such as steps S101 to S103 in the embodiment shown in fig. 2. Alternatively, the processor 710, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 510 to 530 shown in fig. 7.
The sensor module includes: the device comprises a microphone array, a light-sensitive sensor, a radio-frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor; the microphone array is used for collecting first voice information, the light-induced sensor is used for collecting light-induced information, the radio-frequency sensor is used for collecting radio-frequency information, the camera is used for collecting first video information, and the touch sensor is used for collecting first touch information; the limiting sensor is used for acquiring action position information of a part on the robot, which is associated with the limiting sensor, the current sensor is used for acquiring a current signal of a power supply in the robot, and the acceleration sensor is used for acquiring acceleration information of the robot; the temperature sensor is used for collecting temperature information of the environment where the robot is located, the infrared sensor is used for collecting first obstacle information of the environment where the robot is located, and the ultrasonic sensor is used for collecting second obstacle information of the environment where the robot is located.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 720 and executed by the processor 710 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing certain functions, which are used to describe the execution of the computer program in the robot 700.
Those skilled in the art will appreciate that fig. 8 is merely an example of a robot and is not intended to be limiting and may include more or fewer components than shown, or some components in combination, or different components such as input output devices, network access devices, buses, etc.
The processor 710 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 720 may be an internal memory unit of the robot or an external memory device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 720 is used for storing the computer program and other programs and data required by the robot. The memory 720 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The control method of the robot provided by the embodiment of the application can be applied to terminal devices such as a computer, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA) and the like, and the embodiment of the application does not limit the specific types of the terminal devices.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other ways. For example, the above-described terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the method embodiments described above when the computer program is executed by one or more processors.
Also, the present application provides a computer program product; when the computer program product runs on a terminal device, the terminal device is enabled to implement the steps in the above-mentioned method embodiments.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

Translated from Chinese

1. A control method of a robot, characterized by comprising:
acquiring first data collected by a sensor module in the robot, wherein the first data comprises human-computer interaction information, performance information of the robot, and environment information of the environment where the robot is located;
obtaining a control strategy of the robot based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
and controlling the robot to act based on the control strategy.

2. The control method of a robot according to claim 1, characterized in that the sensor module comprises a microphone array, a light sensing sensor, a radio frequency sensor, a camera and a touch sensor;
the human-computer interaction information comprises first voice information collected by the microphone array, light sensing information collected by the light sensing sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera, and first touch information collected by the touch sensor;
wherein the light sensing information is used for determining an action instruction of a user.

3. The control method of a robot according to claim 2, characterized in that the sensor module comprises: a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the performance information comprises action position information, collected by the limit sensor, of a component on the robot associated with the limit sensor, a current signal, collected by the current sensor, of a power supply in the robot, and acceleration information of the robot collected by the acceleration sensor;
the environment information comprises temperature information of the environment where the robot is located collected by the temperature sensor, first obstacle information of the environment where the robot is located collected by the infrared sensor, and second obstacle information of the environment where the robot is located collected by the ultrasonic sensor.

4. The control method of a robot according to any one of claims 1 to 3, characterized in that obtaining the control strategy of the robot based on the first data comprises:
performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data;
and obtaining the control strategy of the robot based on the first fusion data and the second fusion data.

5. The control method of a robot according to claim 2, characterized in that acquiring the first data collected by the sensor module in the robot comprises:
acquiring second voice information collected by the microphone array;
determining, based on a preset wake-up instruction, whether the second voice information matches the wake-up instruction;
and if the second voice information matches the wake-up instruction, acquiring the first data collected by the sensor module.

6. The control method of a robot according to claim 2, characterized in that acquiring the first data collected by the sensor module in the robot comprises:
acquiring second touch information collected by the touch sensor, and determining a first duration of the second touch information;
and if the first duration is greater than a first preset time, acquiring the first data collected by the sensor module.

7. The control method of a robot according to claim 2, characterized in that acquiring the first data collected by the sensor module in the robot comprises:
acquiring second video information collected by the camera, and determining whether face information exists in the second video information;
if the face information exists in the second video information, determining a second duration for which the face information exists in the second video information;
and if the second duration is greater than a second preset time, acquiring the first data collected by the sensor module.

8. A robot, comprising a sensor module, a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the control method of a robot according to any one of claims 1 to 7.

9. The robot according to claim 8, characterized in that the sensor module comprises: a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the microphone array is used for collecting first voice information, the light sensing sensor is used for collecting light sensing information, the radio frequency sensor is used for collecting radio frequency information, the camera is used for collecting first video information, and the touch sensor is used for collecting first touch information;
the limit sensor is used for collecting action position information of a component on the robot associated with the limit sensor, the current sensor is used for collecting a current signal of a power supply in the robot, and the acceleration sensor is used for collecting acceleration information of the robot;
the temperature sensor is used for collecting temperature information of the environment where the robot is located, the infrared sensor is used for collecting first obstacle information of the environment where the robot is located, and the ultrasonic sensor is used for collecting second obstacle information of the environment where the robot is located.

10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the control method of a robot according to any one of claims 1 to 7.
CN202111518577.8A | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium | Active | CN114211486B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111518577.8A CN114211486B (en) | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111518577.8A | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium

Publications (2)

Publication Number | Publication Date
CN114211486A | 2022-03-22
CN114211486B (en) | 2024-03-22

Family

ID=80701295

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111518577.8A Active CN114211486B (en) | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium

Country Status (1)

Country | Link
CN (1) | CN114211486B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2024088376A1 (en) * | 2022-10-28 | 2024-05-02 | 苏州科瓴精密机械科技有限公司 | Robot control method and device, robot, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103064416A (en) * | 2012-12-10 | 2013-04-24 | 江西洪都航空工业集团有限责任公司 | Indoor and outdoor autonomous navigation system for inspection robot
CN104216505A (en) * | 2013-05-29 | 2014-12-17 | 腾讯科技(深圳)有限公司 | Control method and device of portable intelligent terminal
WO2017157302A1 (en) * | 2016-03-17 | 2017-09-21 | 北京贝虎机器人技术有限公司 | Robot
US20190138268A1 (en) * | 2017-11-08 | 2019-05-09 | International Business Machines Corporation | Sensor Fusion Service to Enhance Human Computer Interactions
CN109739223A (en) * | 2018-12-17 | 2019-05-10 | 中国科学院深圳先进技术研究院 | Robot obstacle avoidance control method, device and terminal equipment
CN110154056A (en) * | 2019-06-17 | 2019-08-23 | 常州摩本智能科技有限公司 | Service robot and its man-machine interaction method
US20200050173A1 (en) * | 2018-08-07 | 2020-02-13 | Embodied, Inc. | Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
CN210433409U (en) * | 2019-05-22 | 2020-05-01 | 合肥师范学院 | Robot of sweeping floor with speech control
WO2021232933A1 (en) * | 2020-05-19 | 2021-11-25 | 华为技术有限公司 | Safety protection method and apparatus for robot, and robot

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103064416A (en) * | 2012-12-10 | 2013-04-24 | 江西洪都航空工业集团有限责任公司 | Indoor and outdoor autonomous navigation system for inspection robot
CN104216505A (en) * | 2013-05-29 | 2014-12-17 | 腾讯科技(深圳)有限公司 | Control method and device of portable intelligent terminal
WO2017157302A1 (en) * | 2016-03-17 | 2017-09-21 | 北京贝虎机器人技术有限公司 | Robot
US20190138268A1 (en) * | 2017-11-08 | 2019-05-09 | International Business Machines Corporation | Sensor Fusion Service to Enhance Human Computer Interactions
US20200050173A1 (en) * | 2018-08-07 | 2020-02-13 | Embodied, Inc. | Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
CN109739223A (en) * | 2018-12-17 | 2019-05-10 | 中国科学院深圳先进技术研究院 | Robot obstacle avoidance control method, device and terminal equipment
CN210433409U (en) * | 2019-05-22 | 2020-05-01 | 合肥师范学院 | Robot of sweeping floor with speech control
CN110154056A (en) * | 2019-06-17 | 2019-08-23 | 常州摩本智能科技有限公司 | Service robot and its man-machine interaction method
WO2021232933A1 (en) * | 2020-05-19 | 2021-11-25 | 华为技术有限公司 | Safety protection method and apparatus for robot, and robot

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2024088376A1 (en) * | 2022-10-28 | 2024-05-02 | 苏州科瓴精密机械科技有限公司 | Robot control method and device, robot, and storage medium

Also Published As

Publication number | Publication date
CN114211486B (en) | 2024-03-22

Similar Documents

Publication | Title
CN103890696B (en) | Certified Gesture Recognition
CN109739223B (en) | Robot obstacle avoidance control method and device, terminal device and storage medium
US9746929B2 (en) | Gesture recognition using gesture elements
CN109992091A (en) | A human-computer interaction method, device, robot and storage medium
CN107370758B (en) | A login method and mobile terminal
CN108469772A (en) | A control method and device for an intelligent device
CN113900577A (en) | Application program control method and device, electronic equipment and storage medium
CN107707738A (en) | A kind of face identification method and mobile terminal
US20200234707A1 (en) | Voice interaction processing method and apparatus
CN110850982A (en) | AR-based human-computer interaction learning method, system, device and storage medium
CN114211486A (en) | Robot control method, robot and storage medium
CN104169858B (en) | Method and device for a terminal device to recognize user gestures
CN106055959B (en) | Unlocking method and mobile terminal
CN114415850A (en) | Control method and device, touch control pen and computer readable storage medium
CN114564102A (en) | Automobile cabin interaction method and device and vehicle
CN114283798A (en) | Radio receiving method of handheld device and handheld device
CN116259314A (en) | Method and device for controlling voice-controlled equipment, controlling voice-controlled equipment
CN111506183A (en) | Intelligent terminal and user interaction method
CN105549894A (en) | Touch information processing method and apparatus, touch information acquisition method and apparatus and touch information processing system
CN110788866A (en) | Robot awakening method and device and terminal equipment
CN113448429B (en) | Method and device for controlling electronic equipment based on gestures, storage medium and electronic equipment
WO2023207611A1 (en) | Cleaning operation execution method and apparatus, storage medium, and electronic apparatus
WO2023197750A1 (en) | Control method and apparatus for automobile charging port cover, electric vehicle, and medium
EP4439342A1 (en) | Man-machine interaction method and system, and processing device
CN113613220A (en) | Method for inputting instruction to terminal equipment, method and device for receiving instruction

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
