CN113516715A - Target area inputting method and device, storage medium, chip and robot - Google Patents

Target area inputting method and device, storage medium, chip and robot

Info

Publication number
CN113516715A
CN113516715A, CN202110801460.4A, CN202110801460A
Authority
CN
China
Prior art keywords
robot
sampling
target area
point
pose data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110801460.4A
Other languages
Chinese (zh)
Inventor
卜大鹏
郑凯林
霍峰
秦宝星
程昊天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202110801460.4A
Publication of CN113516715A
Legal status: Pending (current)

Abstract

The application discloses a target area entry method and device, a storage medium, a chip and a robot. The method comprises the following steps: receiving a target area entry instruction and determining the current position of the robot in a preset map; moving along a specific route under external driving, with the current position as the initial position; periodically sampling points on the route while the robot moves along the specific route, and recording the pose data of the sampling points; and storing and outputting, in synchronization with the sampling, the marking line formed by the sampling points, and entering the marking line according to a received entry confirmation instruction. With this scheme, the target area can be accurately identified on the map; the determined target area has high precision, entry is efficient, and the operation is simple and convenient.

Description

Target area inputting method and device, storage medium, chip and robot
Technical Field
The application relates to the technical field of robots, in particular to a target area input method, a target area input device, a storage medium, a chip and a robot.
Background
With the development of artificial intelligence technology, various intelligent robots have appeared, such as floor sweeping robots, floor mopping robots, vacuum cleaners and weeding machines. During operation, these cleaning robots can automatically identify surrounding obstacles and avoid them; they not only free up labor and save labor costs, but also improve cleaning efficiency.
At present, when a cleaning robot performs cleaning operations according to a navigation map, abnormal situations often occur on the work site, such as the robot mistakenly cleaning a carpet area, colliding with glass, or falling from a stairway or passageway. These problems of the prior art therefore need to be addressed.
Disclosure of Invention
In view of the above, the invention provides a target area entry method, a target area entry device, a storage medium, a chip and a robot, so that a target area can be accurately marked on the navigation map of the robot, thereby solving problems such as the robot mistakenly cleaning a carpet area, colliding with glass, or falling on the work site.
According to an aspect of the application, there is provided a target area entry method, the method being performed by a robot, the method comprising: receiving a target area input instruction, and determining the current position of the robot in a preset map; moving along a specific route based on external driving and with the current position as an initial position; periodically sampling points in the route, and recording pose data of sampling points; and outputting a marking line formed by the sampling points synchronously with the sampling and recording the marking line according to the received recording confirmation instruction.
Optionally, a user of the robot drives the robot to move along the particular route determined by the user, wherein the particular route is associated with the target area.
Optionally, before receiving the target area entry instruction and then determining the current position of the robot in the preset map, the method further comprises: in response to a trigger of a user, the robot patrols a preset space determined by the user to generate the map, wherein the map is a grid map of the preset space.
Optionally, the step of moving along a specific route based on the external driving and with the current position as the initial position comprises: determining a marking reference point and an actual reference point, the distance between which and the marking reference point meets the preset offset, according to the selection of a user; taking an actual reference point corresponding to the robot at the current position as an initial point to drive the actual reference point to move along the specific route; wherein the mark reference point is a feature point representing an outer contour of the robot.
Optionally, the step of periodically sampling points in the route and recording pose data of the sampling points comprises: acquiring pose data of the reference points corresponding to each sampling point in the map; calculating the pose data of each sampling point on the route in the map according to a preset static coordinate conversion relation and the pose data of a reference point corresponding to each sampling point in the map; wherein the reference point is a front wheel pivot center of the robot.
Optionally, an offset of the actual reference point relative to the marked reference point is smaller than a preset range.
Optionally, the step of periodically sampling points in the route and recording pose data of the sampling points further includes: acquiring pose data of the marked reference point at the initial position; calculating pose data of each sampling point on the route in the map during the movement of the robot along the specific route; and if the difference value between the pose data of one sampling point and the pose data of the previous sampling point is greater than a first preset threshold value, taking the sampling point as an effective sampling point and recording the pose data of the effective sampling point.
Optionally, the outputting a mark line composed of the sampling points in synchronization with the sampling and recording the mark line according to the received recording confirmation instruction includes: and sequentially connecting each effective sampling point to form the mark line, and then performing smooth filtering processing on the mark line and outputting the filtered mark line.
Optionally, the step of outputting a mark line constituted by the sampling points in synchronization with the sampling and recording the mark line according to the received recording confirmation instruction further includes: if the difference value between the pose data of the previous sampling point of one sampling point and the pose data of the initial point is larger than a second preset threshold value and the difference value between the pose data of the sampling point and the pose data of the initial point is smaller than a third preset threshold value, the sampling point and the initial point are connected based on a preset rule so that a marking line formed by the sampling points forms a closed loop, wherein the second preset threshold value is equal to the third preset threshold value.
According to yet another aspect of the present application, there is provided a target area entry device for a robot, the device comprising: the device comprises a positioning module, a motion execution module, a recording module and an output module;
the positioning module is used for receiving a target area input instruction and determining the current position of the robot in a preset map;
the motion execution module is used for moving along a specific route based on external driving and taking the current position as an initial position;
the recording module is used for periodically sampling points in the route and recording pose data of the sampling points; and
and the output module is used for outputting the mark line formed by the sampling points synchronously with the sampling and recording the mark line according to the received recording confirmation instruction.
According to a further aspect of the present application, there is provided a storage medium storing a computer program which can be loaded by a processor to perform the steps of any of the target area entry methods described above.
According to yet another aspect of the present application, there is provided a chip comprising at least one processor and an interface; wherein the interface is configured to provide the at least one processor with program instructions or data; and the at least one processor is configured to execute the program instructions to perform the steps of any of the target area entry methods described above.
According to yet another aspect of the present application, there is provided a robot comprising a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of any of the target area entry methods described above.
The target area recording method, the target area recording device, the storage medium, the chip and the robot can complete the marking of the target area on the map according to the recorded marking line so as to achieve the function of accurately recording the target area, realize the accurate positioning marking of the outline of the target area on the navigation map, and ensure that the determined target area has higher precision, high recording efficiency and simple and convenient operation.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
FIG. 1 is a flow chart of a target area entry method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a robot outer contour feature point selection according to an embodiment of the application;
FIG. 3 is a schematic diagram of a robot outer contour feature point selection according to another embodiment of the present application;
fig. 4 is a schematic diagram of a robot moving along a target area to be measured and recording sampling points according to an embodiment of the application;
FIG. 5 is a schematic sub-flow chart of step S200 shown in FIG. 1;
FIG. 6 is a schematic sub-flow chart of step S400 shown in FIG. 1;
FIG. 7 is a schematic diagram of a target area entry device according to an embodiment of the present application;
FIGS. 8a-8f are application scenario diagrams of a target area entry method according to an embodiment of the application;
FIG. 9 is a schematic diagram of a chip according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a robot provided according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "first" are used herein for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The following disclosure provides many different embodiments or examples for implementing different features of the application. In order to simplify the disclosure of the present application, specific example components and arrangements are described below. Of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
In view of the problems mentioned in the background art, the inventors have found that although existing robots are equipped with sensing devices such as laser radars and cameras, these devices sit at a certain height above the ground and therefore cannot accurately scan and identify abnormal information on the ground, such as special areas like carpets, pits, slopes and stairway openings. In addition, for special areas such as glass walls, the high light transmittance of glass means the laser beam easily passes through it, so the distance between the robot and the glass wall cannot be accurately sensed from the received reflected light. Since information about such special areas cannot be accurately sensed, the robot cannot mark them on the navigation map; and if they are not marked, the robot will pass through them or perform cleaning operations in them. This causes the abnormal phenomena observed on site, such as the robot mistakenly cleaning a carpet area, colliding with glass, or falling from a stairway or passageway.
To avoid these abnormal phenomena, the information of these particular areas should be accurately marked on the navigation map. However, since the sensing devices carried by the robot cannot accurately scan and identify such special areas, the user can only draw them manually in the terminal Application (APP), and the corresponding marks are then added to the navigation map according to the hand-drawn areas to construct the cleaning map. If a special area is marked as a virtual wall, the robot cannot pass through that area; on the one hand this compensates for the robot's inability to sense and detect the obstacle, and on the other hand setting a forbidden zone keeps the robot from working in that area. However, when the user hand-draws on the APP, the drawn special area often deviates significantly from reality because there is no reference for comparison or because distances to the real scene are estimated inaccurately; hand-drawing is also inflexible and cannot produce a mark with the same shape and position as the real scene. As a result, the robot may still mistakenly clean the carpet area, collide with glass, or fall from a stairway or passageway.
In view of the above, the present invention aims to accurately mark the information of these target areas (special areas) on the navigation map, so as to avoid abnormal problems such as the robot mistakenly cleaning a carpet area, colliding with glass, or falling on the work site. It therefore becomes important to develop an entry method that can accurately mark a target area (special area) on a navigation map.
Fig. 1 is a flowchart of a target area entry method according to an embodiment of the present application, and referring to fig. 1, an embodiment of the present application provides a target area entry method, which is performed by a robot, and includes: step S100, receiving a target area input instruction, and determining the current position of the robot in a preset map; step S200, based on external driving and taking the current position as an initial position to move along a specific route; step S300, periodically sampling points in the route, and recording pose data of the sampling points; and S400, outputting a mark line formed by the sampling points synchronously with the sampling and recording the mark line according to the received recording confirmation instruction. Wherein the particular route is associated with the target area.
In this embodiment, the robot may be, for example, an intelligent cleaning robot. In some other embodiments, the robot may also be a robot with other specific functions, such as a delivery robot, a navigation robot, etc.
In step S100, the robot receives a target area entry instruction and then determines its current position in a preset map. When the robot is in operation, it senses whether an obstacle exists through its own sensing devices, such as a laser radar and a camera; in addition, the robot may adopt Simultaneous Localization And Mapping (SLAM) technology to achieve autonomous movement and localization.
The robot can obtain three-dimensional coordinate data of the current scene in real time through its on-board positioning device. These coordinates are expressed in a first coordinate system, which may be the laser coordinate system of the robot. Since the laser coordinate system changes constantly as the robot moves, the three-dimensional coordinate data need to be converted into a common static coordinate system (a second coordinate system) for convenient data processing; the second coordinate system may be the world coordinate system. Obtaining the accurate position of the robot in the world coordinate system is the precondition for accurately entering the target area. Based on the positioning interface provided by SLAM, positioning data may be published at a frequency of 10 Hz; for example, the positioning data may be the pose data, in the world coordinate system, of the current position of the center of the robot's front wheel rotation axis. The PNC module on the robot, for example, subscribes to data in the form of ROS topic communication and determines whether the robot has completed accurate localization.
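As a rough illustration of the frame conversion described above, the following Python sketch transforms a point measured in the robot's own frame into the static world frame once the SLAM positioning gives the robot's pose; the function name and the simple 2D pose convention are assumptions for illustration, not taken from the patent.

```python
import math

def to_world(robot_pose_in_world, point_in_robot_frame):
    """Convert a 2D point from the robot (laser) frame into the world frame.

    robot_pose_in_world:  (x, y, theta) of the robot frame origin in the world frame,
                          e.g. the SLAM pose published at roughly 10 Hz.
    point_in_robot_frame: (px, py) measured in the moving robot frame.
    """
    x, y, theta = robot_pose_in_world
    px, py = point_in_robot_frame
    wx = x + px * math.cos(theta) - py * math.sin(theta)
    wy = y + px * math.sin(theta) + py * math.cos(theta)
    return wx, wy

# A point half a metre ahead of a robot facing +y at (2, 1) lands at (2, 1.5) in the world frame.
print(to_world((2.0, 1.0, math.pi / 2), (0.5, 0.0)))
```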
In step S200, the robot moves along a specific route under external driving, with the current position as the initial position. The specific route is associated with the target area, that is, the robot walks near the target area along its edge according to the specific route; the target area may be a special area such as a virtual wall, glass, a slope area, a carpet area, a speed bump, a display area, a stair area or an elevator area. For example, in this embodiment, after the robot is successfully positioned at a certain location, the user manually drives the robot to the starting point of the target area to be entered; triggered by the user, the robot then determines the pose data of this starting point in the world coordinate system, and the user drives the robot along the specific route (i.e. along the edge of the target area) with the starting point as the initial position. Alternatively, the user may directly drive the robot to the starting point of the target area, which becomes the current position; triggered by the user, the robot then performs a positioning operation and determines the pose data of the current position in the world coordinate system, and the user drives the robot along the specific route (i.e. along the edge of the target area) with the current position as the initial position.
In step S300, points on the route are periodically sampled and the pose data of the sampling points are recorded. In this step, while the robot moves along the specific route around the target area, the pose data of the robot at the position of the current frame (i.e. the current sampling point) are periodically acquired with a certain sampling period, and the pose data are recorded in a corresponding data structure.
In step S400, a marking line formed by the sampling points is output in synchronization with the sampling, and is entered according to a received entry confirmation instruction. While moving along the specific route, the robot periodically samples and acquires the pose data of a number of sampling points and records the pose data of each sampling point in a corresponding data structure; when the sampling points are output to the map for display, they can be automatically connected into a line to form the marking line. Thus, the robot can sample periodically while moving along the specific route, store and record the pose data of the sampling points, output the marking line formed by the sampling points in synchronization with the sampling, and enter the marking line according to a received entry confirmation instruction, thereby accurately entering the target area. In this embodiment, the marking of the target area on the map is completed according to the marking line; the mark corresponds to the special area, for example a virtual wall indicating that the robot is prohibited from passing through that area. This replaces the marking mode in which the user manually draws the target area on the APP, so that the target area information is accurately identified on the map, an accurate target area marking result is obtained, and a cleaning map is generated, allowing the robot to plan a moving path or a cleaning task more accurately.
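A minimal sketch of the sampling-and-output loop described in steps S300 and S400 might look as follows; the class and function names are hypothetical, and the positioning callback stands in for whatever interface the SLAM stack actually provides.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Pose = Tuple[float, float, float]   # (x, y, theta) in the world frame

@dataclass
class MarkLineRecorder:
    """Periodically sample the marking reference point pose and keep the mark line."""
    get_pose: Callable[[], Pose]    # positioning callback, e.g. the fused SLAM pose
    samples: List[Pose] = field(default_factory=list)

    def step(self) -> List[Pose]:
        """Called once per sampling period while the robot is driven externally."""
        self.samples.append(self.get_pose())
        return self.samples          # current mark line, output in sync with sampling

# Usage sketch (hypothetical driver loop):
#   recorder = MarkLineRecorder(get_pose=read_slam_pose)
#   while not entry_confirmed():
#       polyline = recorder.step()   # display the polyline on the map each period
#       sleep(sampling_period)
```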
According to the above technical scheme, the problem that the target area cannot be marked on the map because the robot cannot sense and identify it can be solved, the excessive deviation between a target area hand-drawn by the user in the APP and the actual site can be avoided, and the accuracy and flexibility of marking the target area on the map are improved.
For example, the user of the robot may determine a target area to be marked as required, then drive the robot or move the robot to the vicinity of the target area, and control the robot to move according to a specific route (for example, move along the edge of the target area) in the vicinity of the target area, and at the same time, periodically sample points in the route and record pose data of the sampled points to acquire a mark line of the target area to be marked.
Optionally, before step S100, the method further comprises: in response to a trigger of a user, the robot patrols a preset space determined by the user to generate the map, wherein the map is a grid map of the preset space.
In particular, the grid map is also called a cost map and can be used for robot navigation. The map construction method can be applied to an Application (APP) on a terminal device; the terminal device may be a smart device such as a mobile phone or a tablet computer and can communicate with the cleaning robot. The grid map of the preset space in the above step may be a point cloud map constructed in advance according to a 3D laser SLAM algorithm while the robot patrols the preset space determined by the user, or it may be a map obtained from a network or input by the user through an application program, where a map constructed by the application program can be displayed visually on the smart device. The map in the above step may be a map of an area designated by the user, or a map of all constructed areas.
Specifically, in step S200, the step of moving along a specific route based on the external driving and with the current position as the initial position includes: determining a marking reference point and an actual reference point, the distance between which and the marking reference point meets the preset offset, according to the selection of a user; taking an actual reference point corresponding to the robot at the current position as an initial point to drive the actual reference point to move along the specific route; wherein the mark reference point is a feature point representing an outer contour of the robot.
In this embodiment, a certain reference point on the outer contour of the robot may be used as the marking reference point according to the shape and position of the target area. The selection rule is to choose, as far as possible, a point with obvious features on the robot's outer contour. For a cleaning robot, for example, the water absorption rake is arranged on the outer side of the robot contour, which makes it convenient to observe and align the robot while marking the target area; therefore a point on the edge of the water absorption rake on the left or right side of the robot is preferably selected as the marking reference point for recording.
Fig. 2 is a schematic diagram of robot outer contour feature point selection according to an embodiment of the present application, and fig. 3 is a schematic diagram of robot outer contour feature point selection according to another embodiment of the present application. As shown in fig. 2, a point at the edge of the left side rake 30 of the robot 1000 is selected as the marking reference point; as shown in fig. 3, a point at the edge of the right side rake 40 of the robot 1000 is selected as the marking reference point. A point at the edge of the rake is chosen because the rake is located at the outermost side of the body of the robot 1000, so that the robot 1000 can be observed and aligned with the target area to be entered when recording sampling points. When the point at the edge of the left side rake 30 of the robot is selected as the marking reference point, for example, a point whose distance from it satisfies a preset offset may also be selected as the actual reference point, and the actual reference point corresponding to the robot's current position is used as the initial point, so that the actual reference point is driven to move along the specific route.
In step S300, the step of periodically sampling points in the route and recording pose data of sampling points includes: acquiring pose data of the reference points corresponding to each sampling point in the map; calculating the pose data of each sampling point on the route in the map according to a preset static coordinate conversion relation and the pose data of a reference point corresponding to each sampling point in the map; wherein the reference point is a front wheel pivot center of the robot.
In this embodiment, the robot may cyclically read the positioning information fused from the odometer and the laser radar to obtain the pose data of the center of the robot's front wheel rotation axis (steering_link) in the world coordinate system; the pose data specifically include the position coordinates (x, y) and the orientation (direction angle theta). The center of the front wheel rotation axis is therefore used as the reference point. Generally, a tf coordinate transformation tree is maintained between the center of the front wheel pivot of the robot 1000 and the world coordinate system, i.e. a tree structure maintains the time-varying coordinate transformations between multiple coordinate systems, and the transformation between any two coordinate systems can be obtained from it. A fixed coordinate transformation, also called a static tf transformation, exists between the left water absorption rake edge 30 on the outermost side of the body, the right water absorption rake edge 40 on the outermost side of the body, and the center of the front wheel rotation axis of the robot 1000. Illustratively, for the robot 1000, the translation between the left suction rake edge 30 and the center of the front wheel rotation axis is (-0.15, 0.48), from which the pose data of the left suction rake edge 30, serving as a marking reference point, in the map are obtained. The translation between the right suction rake edge 40 and the center of the front wheel rotation axis is (-0.15, -0.48), from which the pose data of the right suction rake edge 40, serving as a marking reference point, in the map are obtained. It should be noted that the specific coordinate values of these translations can be obtained by geometric mapping of the real physical model of the robot.
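The static transform described above can be applied as in the sketch below. The (-0.15, ±0.48) translations are the values quoted in this paragraph, while the function name and the assumption that the offsets are expressed in the robot frame are illustrative only.

```python
import math

# Static translation of each marking reference point relative to the front wheel
# pivot center (steering_link), in metres, in the robot frame (values from the text above).
RAKE_OFFSETS = {
    "left_rake_edge":  (-0.15,  0.48),
    "right_rake_edge": (-0.15, -0.48),
}

def marking_point_pose(steering_pose, side="right_rake_edge"):
    """Apply the static tf to get the marking reference point pose in the map frame."""
    x, y, theta = steering_pose                  # pose of the front wheel pivot center
    dx, dy = RAKE_OFFSETS[side]
    mx = x + dx * math.cos(theta) - dy * math.sin(theta)
    my = y + dx * math.sin(theta) + dy * math.cos(theta)
    return mx, my, theta                         # a pure translation leaves theta unchanged
```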
In order to increase the accuracy of the pose data of the sampling points obtained by the robot 1000 and improve the accuracy of entering the target area on the map, an actual reference point whose distance from the marking reference point satisfies a preset offset is preferably selected as the recording point. For example, when the target area is a glass wall, a certain preset distance is kept between the marking reference point on the robot and the glass wall to prevent the robot from scratching the glass wall during movement; when the actual reference point coincides with the initial point of the target area, the robot, triggered by the user, periodically samples points on the route and records the pose data of the sampling points while moving along the specific route.
In an alternative embodiment, the initial position may be the position where the robot 1000 starts to enter the designated target area. When the robot 1000 reaches the initial position, it starts the entry operation for the target area in response to the user's trigger, that is, the robot 1000 periodically samples points on the specific route while moving along it and records the pose data of the sampling points. If the target region is a linear region, either of its two endpoints may be selected as the initial position; if the target region is a closed region, any point of the region that satisfies the condition may be selected as the initial position.
When the left side edge 30 or the right side edge 40 of the robot 1000 is selected as the marking reference point, the user can make an adjustment for the special area so that the robot 1000 moves more smoothly along the specific route. Optionally, a certain movement path offset may be set for the robot; for example, the offset is controlled within ±5 cm, that is, the actual reference point corresponding to the robot at the current position is taken as the initial point to drive the actual reference point to move along the specific route, and the maximum allowed offset is within ±5 cm, so as to ensure the accuracy of the target area to be entered on the map. It should be noted that before the robot 1000 starts to record the samples of the actual reference point in response to the user's trigger, it must be confirmed whether the left or the right marking reference point of the robot 1000 is selected, and a virtual wall must be distinguished from the other special areas, because a virtual wall is a linear area while the other special areas are closed areas.
Illustratively, fig. 4 is a schematic diagram of a robot moving along a target area to be measured and recording sampling points according to an embodiment of the present application. As shown in fig. 4, the right side water absorption rake edge 40 of the robot 1000 is selected as the marking reference point (in this embodiment the marking reference point is also the actual reference point). When the right side water absorption rake edge 40 reaches the initial point of the target area 2000 (such as a virtual wall), the user clicks to start recording; the robot 1000, triggered by the user, then moves along the specific route determined by the user (the direction indicated by the dotted arrow in fig. 4), periodically samples points on the route, and records the pose data of the sampling points.
Fig. 5 is a schematic sub-flow diagram of step S200 shown in fig. 1. As shown in fig. 5, once the entry operation starts and the robot 1000 moves along the specific route, the step of periodically sampling points on the route and recording the pose data of the sampling points further includes: step S210, acquiring the pose data of the marking reference point at the initial position; step S220, calculating the pose data, in the map, of each sampling point on the route while the robot moves along the specific route; step S230, if the difference between the pose data of a sampling point and the pose data of the previous sampling point is greater than a first preset threshold, taking the sampling point as a valid sampling point and recording its pose data.
In step S210, the pose data of the center of the front wheel pivot of the robot may be obtained in real time from the positioning information fused from the odometer and the laser radar, and the pose data of the robot with the left or right rake edge as the marking reference point are then calculated through the static tf coordinate transformation.
In step S220, the pose data, in the map, of each sampling point on the route are calculated while the robot moves along the specific route. The pose data of the marking reference point are sampled and recorded periodically, each frame corresponding to one period. For example, in the first frame the pose data of the marking reference point (or actual reference point) at the initial position are obtained and recorded as the pose data of the first sampling point; in each subsequent period the pose data are computed based on the SLAM technique, so that the pose data of the marking reference point (or actual reference point) at the robot's current position are recorded and stored in real time while the robot moves along the specific route.
In step S230, if the difference between the pose data of a sampling point and the pose data of the previous sampling point is greater than a first preset threshold, the sampling point is taken as a valid sampling point and its pose data are recorded. The first preset threshold may be, for example, 0.1 m. The difference between the pose data of two adjacent sampling points may be obtained by calculating the Euclidean distance between them, i.e. the straight-line distance between the two corresponding coordinate points, which can be computed with the Pythagorean theorem. If the target area to be entered is a linear area, a pause signal can be issued when the marking reference point reaches the end position of the target area; the robot stops moving in response to the pause signal and also stops the periodic sampling and recording.
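A sketch of the valid-sampling-point test, using the 0.1 m threshold mentioned above; comparing each new sample against the last kept sample is an assumption about the exact bookkeeping, and the names are illustrative.

```python
import math

FIRST_PRESET_THRESHOLD_M = 0.1    # minimum spacing between two valid sampling points

def pose_distance(a, b):
    """Euclidean (straight-line) distance between the positions of two poses."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def filter_valid_samples(samples, threshold=FIRST_PRESET_THRESHOLD_M):
    """Keep a sample only if it moved far enough from the previously kept one."""
    if not samples:
        return []
    valid = [samples[0]]
    for pose in samples[1:]:
        if pose_distance(valid[-1], pose) > threshold:
            valid.append(pose)
    return valid
```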
Fig. 6 is a schematic sub-flow diagram of step S400 shown in fig. 1. As shown in fig. 6, after the entry operation starts, the robot periodically samples points on the specific route while moving along it under external driving and records the pose data of the sampling points; at the same time it outputs, in synchronization with the sampling, the marking line formed by the sampling points and enters the marking line according to a received entry confirmation instruction. This step specifically includes: step S410, connecting the valid sampling points in sequence to form the marking line, then performing smoothing filtering on the marking line and outputting the filtered marking line. In step S410, the valid sampling points may be connected in sequence, using an existing algorithm together with the cost map, to form the marking line; the marking line can be displayed visually on the cost map and synchronously uploaded to the application program on the mobile terminal. Specifically, the pose data of all valid sampling points can be stored in a data structure, and a program can then read the recorded pose data of the valid sampling points from that data structure. If the target area to be entered is a linear area, the user can manually connect the valid sampling points into line segments and save them, or the corresponding software can connect them automatically and save them; if the target area to be entered is a closed area, the robot can automatically connect the valid sampling points into line segments after closing them based on the closing condition, and save them.
In addition, if the recorded marking line jitters noticeably, smoothing filtering can be applied and the filtered marking line output. Specifically: based on the points sampled periodically while the robot moves along the specific route, the sampling points are traversed from the 2nd point onward, with n starting from 2, and the abscissa of each sampling point is updated in turn, the abscissa of the nth point being replaced by 0.5 × (abscissa of the previous point + abscissa of the next point); the points are then traversed again from back to front starting from the 2nd-to-last point, updating the abscissa of each sampling point in the same way, which yields the new abscissas and completes the update of all abscissas. The ordinates are updated in the same manner, which is not repeated here. By smoothing the recorded marking line in this way, a smooth marking line is obtained.
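One possible reading of the smoothing rule above, replacing each interior coordinate with the average of its two neighbours in a forward and then a backward pass, is sketched below; the exact update formula in the original text is ambiguous, so this neighbour-averaging form is an assumption.

```python
def smooth_mark_line(points):
    """Neighbour-averaging smoothing filter applied front-to-back and then back-to-front."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    interior = range(1, len(points) - 1)          # skip the first and last sampling points

    def one_pass(values, indices):
        for n in indices:
            values[n] = 0.5 * (values[n - 1] + values[n + 1])

    for values in (xs, ys):                       # abscissas, then ordinates
        one_pass(values, interior)                # forward pass
        one_pass(values, reversed(interior))      # backward pass
    return list(zip(xs, ys))
```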
In some embodiments, if the target area to be entered is not a linear area but a closed area, it is difficult for the manually driven robot to ensure that the line connecting the valid sampling points obtained by periodic sampling closes exactly. Continuing with fig. 6, if the target area to be entered is a closed area, after the entry operation starts, the step of outputting the marking line formed by the sampling points in synchronization with the sampling and entering it according to the received entry confirmation instruction further includes: step S420, if the difference between the pose data of the sampling point preceding a given sampling point and the pose data of the initial point is greater than a second preset threshold, and the difference between the pose data of that sampling point and the pose data of the initial point is less than a third preset threshold, connecting the sampling point and the initial point based on a predetermined rule so that the marking line formed by the sampling points forms a closed loop. As before, the difference between the pose data of two points can be obtained by calculating the Euclidean distance between them, i.e. the straight-line distance between the two corresponding coordinate points, which can be computed with the Pythagorean theorem.
In step S420, for example, if the difference between the pose data of the sampling point preceding the current sampling point and the pose data of the initial point is greater than the second preset threshold, and the difference between the pose data of the current sampling point and the pose data of the initial point is less than the third preset threshold, a Bezier curve may be used to connect, on the coordinate system, the pose data of the sampling point at the current position with the pose data of the initial point, so that the marking line transitions smoothly between the two nearly-closing sampling points and forms a closed loop. A closed target area mark is thus constructed on the map, achieving accurate entry of the closed target area. Meanwhile, even if the robot keeps moving along the specific route and continues to sample points beyond the closed loop, the marking line formed by the recorded sampling points does not change significantly. It should be noted that the second and third preset thresholds are equal, serving as the closing condition; for example, both may be set to 0.3 m, and their values may also be set or adjusted according to the actual situation, which is not limited in the embodiments of the present invention.
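The closing check and Bezier bridge might be sketched as follows; the 0.3 m thresholds are the example values above, while the choice of control point for the quadratic Bezier is an assumption, since the patent only states that a Bezier curve is used to smooth the transition.

```python
import math

def try_close_loop(samples, start, second_threshold=0.3, third_threshold=0.3, steps=10):
    """Bridge the gap to the initial point with a quadratic Bezier when the closing
    condition is met, so the mark line (list of (x, y) points) forms a closed loop."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    if len(samples) < 2:
        return samples
    prev, last = samples[-2], samples[-1]
    if dist(prev, start) > second_threshold and dist(last, start) < third_threshold:
        # Assumed control point: midpoint of the gap pushed along the incoming direction.
        cx = (last[0] + start[0]) / 2 + (last[0] - prev[0]) / 2
        cy = (last[1] + start[1]) / 2 + (last[1] - prev[1]) / 2
        bridge = []
        for i in range(1, steps + 1):
            t = i / steps
            bx = (1 - t) ** 2 * last[0] + 2 * (1 - t) * t * cx + t ** 2 * start[0]
            by = (1 - t) ** 2 * last[1] + 2 * (1 - t) * t * cy + t ** 2 * start[1]
            bridge.append((bx, by))
        return samples + bridge                    # closed: ends exactly at the start point
    return samples
```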
The target area recording method for the robot provided by the embodiment of the invention can realize the function of accurately recording the target area, ensures the accurate marking of the target area on the navigation map, and has the advantages of higher accuracy of the determined target area, high recording efficiency and simple and convenient operation.
Fig. 7 is a schematic diagram of a target area entry apparatus for a robot according to an embodiment of the present application, as shown in fig. 7, the apparatus includes a positioning module 710, a motion execution module 720, a recording module 730, and an output module 740.
The positioning module 710 is configured to receive a target area entry instruction and determine the current position of the robot in a preset map accordingly. The preset map may be a map obtained by the robot in advance through laser radar sensing, or a map input through an external program; for example, the robot may patrol a preset space determined by the user, in response to the user's trigger, to automatically generate the map, and a SLAM technique may be employed during travel to achieve autonomous movement and positioning.
The motion execution module 720 is configured to move along a specific route based on external driving, with the current position, determined by the positioning module after the target area entry instruction is received, as the initial position.
The recording module 730 is configured to periodically sample points on the route and record the pose data of the sampling points. The recording module 730 may be a chip or a data structure with storage and processing functions. Optionally, the recording module 730 may further include a judging module configured to evaluate the periodically obtained sampling points against a certain condition, so as to determine whether the recording module 730 needs to record them.
The output module 740 is configured to output a mark line formed by the sampling points in synchronization with the sampling and record the mark line according to the received recording confirmation instruction.
The target area entry device for the robot provided by this embodiment can periodically sample points on the route and record and store the pose data of the sampling points while the robot moves along the specific route; when the sampling points are output to the map in synchronization with the sampling, they can be automatically connected into line segments to form the marking line, so that the sampling points are acquired in real time and the marking line formed by them is synchronously uploaded to the application program on the mobile terminal through the output module 740.
Other aspects of the target area recording device for the robot proposed in this embodiment are the same as or similar to the target area recording method described above, and are not described again here.
Figs. 8a to 8f are application scene diagrams of a target area entry method according to an embodiment of the present application and mainly concern the user interaction interface of the APP with which the robot records a target area. As shown in figs. 8a to 8b, in the first step the type of target area to be entered can be selected under the virtual wall editing menu in the user interaction interface; for example, the target area types include marking ramps, artwork editing, carpet area, deceleration strip area, display area, highlight area, elevator area, blast area, and so on. Secondly, the type of marking line to be entered is selected under the virtual wall graphic editing menu in the user interaction interface. Thirdly, the left or right side of the robot's outer contour is selected as the marking reference point for entering the target area. During entry, while the robot moves along the specific route, points on the route are periodically sampled, the pose data of the sampling points are recorded, and the sampling points are connected in sequence by an algorithm to form the marking line, which is then smoothed by filtering and output. As shown in fig. 8c, an arc formed by the marking line of the target area to be entered is output by the underlying algorithm; as shown in fig. 8d, the data of the marking line formed by connecting the sampling points in sequence are transmitted in real time to the application program on the mobile terminal, which displays the robot's path through the target area. After the robot finishes moving along the specific route, the user clicks to finish recording, and the marking line is automatically saved and entered by the software in the mobile terminal application, i.e. the entry of the target area into the constructed map is complete. When the closing condition of step S420 in the above embodiment is satisfied, that is, the difference between the pose data of the sampling point preceding a given sampling point and the pose data of the initial point is greater than the second preset threshold and the difference between the pose data of that sampling point and the pose data of the initial point is less than the third preset threshold, the sampling point and the initial point are connected based on a predetermined rule so that the marking line formed by the sampling points forms a closed loop; as shown in figs. 8e to 8f, the marking line is finally closed automatically by the corresponding algorithm to form a closed area, giving an accurate mark of the closed target area on the map. These operations can be implemented as in the foregoing embodiments and are not described in detail here.
Furthermore, the present application also provides a storage medium storing a computer program that can be loaded by a processor to perform the steps of any of the target area entry methods described above.
Illustratively, the storage medium may be any one of the following: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
In addition, the present application further provides a chip, fig. 9 is a schematic structural diagram of a chip provided in an embodiment of the present application, and as shown in fig. 9, the chip 900 includes one or more processors 901 and an interface circuit 902. Optionally, chip 900 may also contain bus 903. Wherein:
the processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits in hardware or by instructions in the form of software in the processor 901. The processor 901 may be one or more of a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, an MCU, an MPU, a CPU, or a co-processor. The methods and steps disclosed in the embodiments of the present application may be implemented or performed by it. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The interface circuit 902 may be used for sending or receiving data, instructions or information, and the processor 901 may perform processing by using the data, instructions or other information received by the interface circuit 902, and may send out processing completion information through the interface circuit 902.
Optionally, the chip also includes memory, which may include read-only memory and random access memory and provides operating instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (NVRAM).
Alternatively, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Alternatively, the chip may be used in a target area entry device for a robot according to an embodiment of the present application. Alternatively, the interface circuit 902 may be used to output the execution result of the processor 901. Reference may be made to the foregoing embodiments regarding a target area entry method provided in one or more embodiments of the present application, and details are not repeated here.
It should be noted that the functions corresponding to the processor 901 and the interface circuit 902 may be implemented by hardware design, software design, or a combination of software and hardware, which is not limited herein.
In addition, the present application further provides a robot. Fig. 10 is a schematic structural diagram of the robot provided in the embodiment of the present application. As shown in fig. 10, a robot 1000 includes a processor 1001 and a memory 1002; optionally, the robot 1000 further comprises a communication bus 1003 and a communication interface 1004, wherein the processor 1001, the communication interface 1004 and the memory 1002 can communicate with each other via the communication bus 1003, and the memory 1002 stores a computer program adapted to be loaded by the processor 1001 and to perform the steps of any of the above-described target area entry methods.
The communication bus 1003 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (enhanced Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For convenience of representation, the buses in the drawings disclosed in the embodiments of the present application are not limited to only one bus or one type of bus.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The target area entry method, device, storage medium, chip and robot provided by the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and implementation manner of the present application, and the description of the above embodiments is only used to help understand the technical scheme and core ideas of the present application; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (13)

CN202110801460.4A | 2021-07-15 | 2021-07-15 | Target area inputting method and device, storage medium, chip and robot | Pending | CN113516715A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110801460.4A | CN113516715A (en) | 2021-07-15 | 2021-07-15 | Target area inputting method and device, storage medium, chip and robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110801460.4A | CN113516715A (en) | 2021-07-15 | 2021-07-15 | Target area inputting method and device, storage medium, chip and robot

Publications (1)

Publication Number | Publication Date
CN113516715A | 2021-10-19

Family

ID=78068264

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110801460.4A | Pending | CN113516715A (en) | 2021-07-15 | 2021-07-15 | Target area inputting method and device, storage medium, chip and robot

Country Status (1)

Country | Link
CN (1) | CN113516715A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115037766A (en) * | 2022-06-12 | 2022-09-09 | 上海慧程工程技术服务有限公司 | Industrial equipment Internet of things data acquisition method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111024100A (en) * | 2019-12-20 | 2020-04-17 | 深圳市优必选科技股份有限公司 | A navigation map updating method, device, readable storage medium and robot
CN111443696A (en) * | 2018-12-28 | 2020-07-24 | 珠海市一微半导体有限公司 | Laser floor sweeping robot path planning method, device and chip
CN111844072A (en) * | 2020-07-21 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Automatic garbage dumping method and device for intelligent robot, intelligent robot and medium
CN112393736A (en) * | 2020-04-26 | 2021-02-23 | 青岛慧拓智能机器有限公司 | Automatic updating system and method for strip mine map
CN113110445A (en) * | 2021-04-13 | 2021-07-13 | 上海高仙自动化科技发展有限公司 | Robot path planning method and device, robot and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111443696A (en) * | 2018-12-28 | 2020-07-24 | 珠海市一微半导体有限公司 | Laser floor sweeping robot path planning method, device and chip
CN111024100A (en) * | 2019-12-20 | 2020-04-17 | 深圳市优必选科技股份有限公司 | A navigation map updating method, device, readable storage medium and robot
CN112393736A (en) * | 2020-04-26 | 2021-02-23 | 青岛慧拓智能机器有限公司 | Automatic updating system and method for strip mine map
CN111844072A (en) * | 2020-07-21 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Automatic garbage dumping method and device for intelligent robot, intelligent robot and medium
CN113110445A (en) * | 2021-04-13 | 2021-07-13 | 上海高仙自动化科技发展有限公司 | Robot path planning method and device, robot and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115037766A (en) * | 2022-06-12 | 2022-09-09 | 上海慧程工程技术服务有限公司 | Industrial equipment Internet of things data acquisition method and device
CN115037766B (en) * | 2022-06-12 | 2023-09-22 | 上海慧程工程技术服务有限公司 | Industrial equipment Internet of things data acquisition method and device

Similar Documents

Publication | Publication Date | Title
EP3985469A1 (en) Cleaning subarea planning method for robot walking along edge, chip and robot
CN113741438B (en) Path planning method, path planning device, storage medium, chip and robot
EP3974778B1 (en) Method and apparatus for updating working map of mobile robot, and storage medium
JP2022062716A (en) Vacuum cleaner control method and control system
JP6867120B2 (en) Cartography method and cartography device
WO2019144541A1 (en) Cleaning robot
CN113907663B (en) Obstacle map construction method, cleaning robot and storage medium
CN114115263B (en) Autonomous mapping method and device for AGV, mobile robot and medium
CN113974507B (en) Carpet detection method, device, cleaning robot and medium for cleaning robot
CN111694358A (en) Method and device for controlling transfer robot, and storage medium
CN108459596A (en) A kind of method in mobile electronic device and the mobile electronic device
CN108628318A (en) Congestion environment detection method and device, robot and storage medium
CN112438658A (en) Cleaning area dividing method for cleaning robot and cleaning robot
CN110716559A (en) A comprehensive control method for picking robots in shopping malls and supermarkets
CN113768419A (en) Method and device for determining sweeping direction of sweeper and sweeper
CN114842106A (en) Method and apparatus for constructing grid map, self-walking apparatus, and storage medium
CN114343507A (en) Map data generation method and device and sweeping robot
CN110850882A (en) Charging pile positioning method and device of sweeping robot
WO2024007807A1 (en) Error correction method and apparatus, and mobile device
JP2022187584A (en) Information processing device, information processing system, information processing method, and program
CN113516715A (en) Target area inputting method and device, storage medium, chip and robot
EP4390313A1 (en) Navigation method and self-propelled apparatus
CN115511939A (en) Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus
CN115167449A (en) An obstacle detection method, device, readable storage medium and mobile robot
US12429352B2 (en) Method for generating working map, operation method, control method, and related apparatuses

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 20211019
