Disclosure of Invention
In order to solve the problems, the invention provides a robot control method and system based on a TOF camera module and a robot. The specific technical scheme of the invention is as follows:
a robot control method based on a TOF camera module comprises the following steps: S1: the robot acquires environmental information and establishes a global map; S2: the robot plans a walking route according to the global map and walks along that route; S3: the TOF camera module acquires depth information of obstacles on the robot's walking route; S4: the robot establishes a local map based on the global map and the depth information and re-plans the walking route according to the local map.
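Purely as an illustration of the S1-S4 flow, the loop can be sketched in Python; the toy grid map, the cell-list route format and all function names are assumptions made for this sketch, not part of the claimed method.

```python
def acquire_environment():                        # S1: build a global map
    return [[0, 0, 0], [0, 0, 0], [0, 0, 0]]      # 0 = free cell

def plan_route(grid):                             # S2: visit every free cell
    return [(r, c) for r in range(len(grid))
            for c in range(len(grid[0])) if grid[r][c] == 0]

def read_depth_obstacles():                       # S3: TOF depth -> obstacle cells
    return [(1, 1)]                               # pretend one obstacle was seen

def build_local_map(global_map, obstacles):       # S4: overlay obstacles
    local = [row[:] for row in global_map]        # keep the global map intact
    for r, c in obstacles:
        local[r][c] = 1                           # 1 = occupied
    return local

global_map = acquire_environment()
route = plan_route(global_map)
local_map = build_local_map(global_map, read_depth_obstacles())
route = plan_route(local_map)                     # re-planned route skips (1, 1)
print((1, 1) in route)
```

Running the sketch shows the re-planned route excluding the cell where the TOF module reported an obstacle, while the global map itself is left unchanged.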
In one or more aspects of the present invention, before walking, the robot calibrates and optimizes the walking route by acquiring real-time environment information.
In one or more aspects of the invention, when the robot walks according to the walking route, it acquires information about the surrounding environment, extracts feature point information of the surroundings for matching, detects whether its actual position matches the position on the global map, calibrates its position on the global map, and optimizes the global map.
In one or more aspects of the present invention, the method for the robot to create the local map in step S4 includes: the TOF camera module obtains depth information of obstacles within its viewing-angle range, the detected depth information being denoted Z0-Zn; the robot establishes a global coordinate system, obtains the position information of each obstacle in the robot's global coordinates from the depth information detected by the TOF camera module and the module's calibrated parameters, and establishes a local map from this position information and the global map.
In one or more aspects of the invention, when the robot acquires the depth information to update the route, it first judges ground trafficability from the detected information and then makes a behavior decision according to the judgment result; the robot updates the decision result into the local map together with the depth information received at the same time, and continues to re-plan the route according to the local map.
In one or more aspects of the present invention, the robot makes behavior decisions based on a rank walking method and/or an edge walking method.
In one or more aspects of the invention, during rank walking, the distance between an obstacle and the robot and the model of the obstacle, both obtained by the TOF camera module, are used to pre-judge how long the robot can continue along the current rank, the moment to begin decelerating in advance when approaching the obstacle, and/or the type of obstacle ahead.
In one or more aspects of the invention, during edge walking, the distance between an obstacle and the robot and the model of the obstacle, both obtained by the TOF camera module, are used to judge whether the obstacle currently being followed is a wall surface, to change the edgewise route according to the structure of the obstacle, and to pre-judge the globally best position at which to start edge walking and/or the currently best edgewise direction. The robot calibrates and optimizes the route in real time while walking, so that it always stays on the optimal walking route. When an obstacle interferes with the robot's walking, the robot acquires the obstacle's depth information through the TOF camera module, establishes a local map and re-plans the route, thereby avoiding the interference and enhancing the robot's anti-interference performance.
The invention further provides a robot control system based on a TOF camera module, which comprises a controller and, connected to the controller, a TOF camera module, an information acquisition module, a distance detection module, a gyroscope and an odometer. The TOF camera module is used for acquiring depth information of obstacles ahead; the information acquisition module is used for acquiring environmental data to establish or modify the global map; the distance detection module is used for detecting the distance between the robot and a wall or an obstacle; the gyroscope is used for detecting the rotation angle of the robot; and the odometer is used for measuring the travel of the robot. By adopting the TOF camera module, the control system detects small obstacles such as toys and electric wires and takes corresponding measures; compared with adopting a plurality of cameras or a multi-line laser head, it performs strongly while greatly reducing manufacturing cost.
The robot control system is provided with a TOF camera module and performs navigation by the robot control method based on the TOF camera module. When the robot walks, real-time local map information of the current robot is established; the walking route can thus be planned in advance according to the local map, obstacles are avoided, and the robot works without collision.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout.
In the description of the present invention, it should be noted that orientation terms such as "central", "lateral", "longitudinal", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore they shall not be construed as limiting the scope of protection of the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the invention, "at least" means one or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "assembled", "connected" and "coupled" are to be construed broadly, e.g., as a fixed connection, a detachable connection or an integral connection; or as a mechanical connection; two elements may be directly connected or connected through an intermediate medium, or the interiors of two elements may communicate with each other. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the present invention, unless otherwise specified and limited, a first feature being "on" or "under" a second feature may include the two features being in direct contact, and may also include the two features not being in direct contact but contacting through another feature between them. Moreover, the first feature being "over", "above" or "on top of" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature; the first feature being "under", "below" or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The technical scheme and the beneficial effects of the invention are made clearer by the following further description of specific embodiments with reference to the accompanying drawings. The embodiments described below are exemplary and are intended to illustrate the invention, not to limit it.
Referring to fig. 1, a robot control method based on a TOF camera module is provided. TOF is an abbreviation of Time of Flight: a sensor emits modulated near-infrared light, which is reflected after encountering an object, and the sensor converts the time difference or phase difference between emission and reflection into the distance of the photographed scene, thereby generating depth information. In addition, combined with shooting by a conventional camera, the three-dimensional contour of the object can be presented as a topographic map in which different colors represent different distances, yielding a three-dimensional 3D model; the TOF camera module is a camera module that acquires data using this TOF technology. The method specifically comprises the following steps: S1: the robot acquires environmental information and establishes a global map; S2: the robot plans a walking route according to the global map and walks along that route; S3: during walking, the TOF camera module acquires depth information of obstacles on the robot's walking route; S4: the robot establishes a local map based on the global map and the depth information and modifies the walking route according to the local map. When an obstacle interferes with the robot's walking, the robot acquires the obstacle's depth information through the TOF camera module, establishes a local map and re-plans the route, thereby avoiding the interference and enhancing the robot's anti-interference performance.
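As a small illustration of the ranging principle just described (an assumption-laden sketch, not code from the invention), the depth value follows directly from the round-trip time of the modulated light:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Convert a measured emission-to-reflection time difference into a depth.

    The light travels to the object and back, so the one-way distance
    is half the round-trip path.
    """
    return C * round_trip_time_s / 2.0

# A 20 ns round trip corresponds to a depth of roughly 3 m.
print(round(tof_distance(20e-9), 3))
```

Phase-difference modules recover the same quantity indirectly from the phase shift of the modulation, but the distance relation above is the underlying principle in both cases.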
As one embodiment, before walking, the robot calibrates or optimizes the walking route by acquiring real-time environment information and comparing it with the grid map stored in the robot. When the robot walks according to the walking route, it acquires information about the surrounding environment, extracts feature point information of the surroundings for matching, detects whether its actual position matches the position on the global map, and then calibrates its position on the global map and optimizes the global map.
According to one implementation, the robot establishes a global coordinate system; the position information of an obstacle in the robot's global coordinates is obtained from the current robot coordinates, the obstacle depth information detected by the TOF camera module and the module's calibrated parameters, and the robot establishes a local map according to this position information. Fig. 2 is a schematic diagram of the horizontal and vertical detection angles of the TOF camera module: the left diagram shows the horizontal detection angle, where Q1 + Q2 represents the detection angle of the TOF camera module, Q1 being its detection angle towards the left and Q2 towards the right. The right diagram shows the vertical detection angle, where P1 + P2 represents the vertical detection angle of the TOF camera module, P1 being the upward detection angle and P2 the downward detection angle. As can be seen from fig. 3, the depth values of objects within the viewing-angle range of the TOF camera module can be measured within the measurement range, and the depth information of detected obstacles can be represented by Z0-Zn; accordingly, the position information of the obstacles in the robot's global coordinates can be obtained from the current robot coordinates, the obstacle depths detected by the TOF camera module and the module's calibrated parameters.
Assuming that the current pose of the robot is (X0, Y0, angle0), the camera-frame coordinates are calculated according to the formulas X = (u - u0)/Ku × Z/f and Y = (v - v0)/Kv × Z/f, where (X, Y, Z) are the coordinates of the depth information point in the camera coordinate system and (u, v) are the pixel coordinates of the projection of the point (X, Y, Z) onto the imaging plane. Because the depth camera captures a depth image, the gray value of each pixel represents the depth from the corresponding position in the environment to the sensor's imaging plane; that is, Z is the value detected by the TOF camera module. 1/Ku and 1/Kv respectively represent the width and height of a pixel, f represents the focal length of the module, and u0 and v0 represent the coordinates of the image center point; these five parameters are obtained by intrinsic calibration of the TOF camera module. The relative coordinates (X, Y) of the obstacle with respect to the TOF camera module are determined by the above formulas, and the position of the obstacle in the robot's global coordinates follows from the current machine coordinates (X0, Y0): X' = X0 + X; Y' = Y0 + Y. Furthermore, the rotation angle a = atan2(Y' - Y0, X' - X0) of the obstacle relative to the robot can be obtained from (X', Y') and the current coordinates of the robot, and the angle of the obstacle in global coordinates is a' = a + angle0. The final obstacle position coordinate is therefore (X', Y', a'); the robot builds a local map from this obstacle position coordinate combined with the global map, and re-plans the route according to the local map. The robot calibrates and optimizes the route in real time while walking, so that it always stays on the optimal walking route.
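The projection just described can be sketched as follows; the function name and the toy calibration values in the example are illustrative assumptions, while the formulas mirror the text (camera-frame point from the pinhole model, then translation into the global frame and the atan2 bearing):

```python
import math

def pixel_to_global(u, v, Z, u0, v0, Ku, Kv, f, pose):
    """Map a depth pixel (u, v) with TOF depth Z into the robot's global frame.

    u0, v0     : image-centre coordinates   } five intrinsic parameters
    1/Ku, 1/Kv : pixel width and height     } obtained by calibrating
    f          : focal length of the module } the TOF camera module
    pose       : current robot pose (X0, Y0, angle0)
    """
    X0, Y0, angle0 = pose
    # Camera-frame coordinates of the obstacle point (pinhole model):
    X = (u - u0) / Ku * Z / f
    Y = (v - v0) / Kv * Z / f
    # Position in the robot's global coordinates:
    Xp, Yp = X0 + X, Y0 + Y
    # Bearing of the obstacle relative to the robot, then its global angle:
    a = math.atan2(Yp - Y0, Xp - X0)
    return (Xp, Yp, a + angle0)

# Illustrative values (Ku = Kv = f = 1 for readability): a pixel 10 columns
# right of the image centre at depth 2 maps 20 units ahead of a robot at
# the origin with zero heading.
print(pixel_to_global(110, 100, 2.0, 100, 100, 1.0, 1.0, 1.0, (0.0, 0.0, 0.0)))
```

With realistic intrinsics the scaling terms change, but the structure of the transform (intrinsics, then pose) stays the same.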
As one implementation, as can be seen from fig. 4, when the robot acquires depth information to update the route, it first judges ground trafficability from the detected information, then makes a behavior decision according to the judgment result, updates the decision result into the local map in combination with the received depth information, and continues to re-plan the route according to the local map. The robot makes behavior decisions based on a rank walking method and/or an edge walking method: rank walking is the common straight-line, bow-shaped walking of a walking robot; edge walking means the robot takes the wall edge as its reference and walks along the wall. During rank walking, the robot uses the obstacle-to-robot distance and the obstacle model obtained by the TOF camera module, combined with the rank walking rules for the corresponding environment, to optimize its decision behavior. The robot pre-judges how long it can continue along the current rank from the local map, the walking route and the walking speed; pre-judges the moment to begin decelerating when approaching an obstacle from the distance to the obstacle and the walking speed; and pre-judges the type of obstacle ahead from the depth information and the walking route, deciding whether to turn back early along the edge to the next rank or to move close to the obstacle, and/or whether a missed-scan area exists to the left or right of the current rank, repairing any gap in time. During edge walking, the robot likewise uses the obstacle distance and obstacle model obtained by the TOF camera module, combined with the edge walking rules for the corresponding environment, to optimize its decision behavior.
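The deceleration pre-judgment mentioned above can be illustrated with a small kinematic sketch; the constant-deceleration model, the safety margin and all parameter names are assumptions made for this example, not values fixed by the invention:

```python
def seconds_until_braking(distance_to_obstacle, speed, decel, margin=0.05):
    """Predict how many seconds the robot may keep full speed before it
    must start decelerating to stop just short of the obstacle.

    With constant deceleration, the braking distance is v**2 / (2 * decel);
    the robot brakes once only that distance (plus a margin) remains.
    All distances in metres, speeds in m/s.
    """
    braking_distance = speed ** 2 / (2.0 * decel)
    free_distance = distance_to_obstacle - braking_distance - margin
    return max(free_distance, 0.0) / speed  # 0.0 means brake immediately

# E.g. 1 m from an obstacle at 0.3 m/s with 0.5 m/s^2 braking:
# braking distance 0.09 m, so about 2.87 s of full-speed travel remain.
print(round(seconds_until_braking(1.0, 0.3, 0.5), 2))
```

The same distance-and-speed inputs named in the text (TOF-measured obstacle distance, current walking speed) drive the prediction here.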
The robot judges whether the obstacle currently followed along the edge is a wall surface according to the depth information and the local map, and changes the edgewise route according to the obstacle's structure; it also compares the local map with the global map to pre-judge the globally best position at which to start edge walking and/or the currently best edgewise direction. By planning routes with these two methods, the machine keeps the robot's walking route neat and orderly.
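One simple way to realize the wall-surface judgment sketched above is a flatness test on the obstacle points detected by the TOF module; the chord-distance criterion and the tolerance value are assumptions for this sketch, not the patented method itself:

```python
import math

def looks_like_wall(points, tolerance=0.03):
    """Rough wall test on (x, y) obstacle points from the TOF module:
    draw a chord from the first to the last point and require every
    point to stay within `tolerance` metres of it. Flat surfaces
    (walls) pass; irregular obstacles such as toys fail.
    """
    (x1, y1), (x2, y2) = points[0], points[-1]
    length = math.hypot(x2 - x1, y2 - y1)
    if length == 0.0:
        return False  # degenerate: all points coincide
    for x, y in points:
        # Perpendicular distance from (x, y) to the chord:
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / length
        if d > tolerance:
            return False
    return True

print(looks_like_wall([(0, 0), (1, 0.005), (2, 0)]))   # nearly flat: wall
print(looks_like_wall([(0, 0), (1, 0.5), (2, 0)]))     # bulging: not a wall
```

A wall verdict lets the robot keep its edgewise course, while a non-wall verdict would trigger the route change described in the text.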
Referring to fig. 5, the robot control system based on the TOF camera module comprises a controller and, connected to the controller, a TOF camera module, an information acquisition module, a distance detection module, a gyroscope and an odometer. The TOF camera module is used for acquiring depth information of obstacles ahead; the information acquisition module is used for acquiring environmental data to establish or modify the global map; the distance detection module is used for detecting the distance between the robot and a wall surface or an obstacle; the gyroscope is used for detecting the rotation angle of the robot; and the odometer is used for measuring the travel of the robot. The information acquisition module is a camera module or a laser head, the laser head being a single-line laser head. The distance detection module comprises at least one of an ultrasonic distance sensor, an infrared intensity detection sensor, an infrared distance sensor, a physical-switch collision sensor, a capacitance-change detection sensor or a resistance-change detection sensor. The camera module shoots pictures around the robot to match feature point information of the surrounding environment in advance and to check whether the robot is at the expected position on the planned route, thereby optimizing the robot's position and calibrating an erroneous map; alternatively, a single-line laser rangefinder may scan the environment ahead of time and be used for map correction and repositioning before sweeping. Either option has a low production cost compared with a multi-line laser head.
On the basis of laser SLAM or visual SLAM, the machine provided by the invention additionally employs a planar TOF module for detecting obstacles, thereby realizing a collision-free operation function.
The robot control system is provided with a TOF camera module and performs navigation by the robot control method based on the TOF camera module described above. The TOF camera module 2 is located at the front of the robot main body 1, and the information acquisition module 3 is located in the middle area of the robot. Before the robot walks, a global map is established and a route is planned according to it; while the robot walks, real-time local map information of the current robot is established from the global map and the detection information of the TOF camera module, so the walking route can be planned and modified in advance according to the local map, obstacles are avoided, and the robot does not collide.
In the description of the specification, reference to "one embodiment", "preferably", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention; schematic representations of these terms in this specification do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The connection modes described in the specification have obvious effects and practical effectiveness.
With the above structure and principle in mind, those skilled in the art should understand that the present invention is not limited to the above embodiments; modifications and substitutions based on known technology in the field fall within the scope of protection of the present invention, which is defined by the appended claims.