Implementation method, device and electronic device for robot picking
Technical Field
The present invention relates to the field of machine vision, and in particular to an implementation method, a device and an electronic device for robot picking.
Background Art
In recent years, machine vision has developed rapidly and is gradually becoming an essential part of automation technology. To improve efficiency and save time and human resources, engineers have developed many robots that integrate automatic picking, sorting and packaging of goods, such as three-arm parallel delta robots.
However, robots in the prior art can only automatically pick objects with a large flat surface, solid bodies, or regularly stacked objects. Hollow objects, shell-shaped objects and objects that may become entangled with one another often cannot be picked effectively.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an implementation method, a device and an electronic device for robot picking, which enable a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
In one aspect, an implementation method for robot picking is provided, including:
receiving an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
receiving shape data of the objects to be picked;
calculating, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
determining grasping information of a graspable target according to its position and posture, and sending the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
Further, calculating the positions and postures of graspable targets among the multiple objects to be picked according to the original three-dimensional image and the shape data includes:
processing the original three-dimensional image to obtain a spatial three-dimensional point set of the multiple objects to be picked;
traversing viewing angles above the container, generating a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slicing the three-dimensional image, and identifying targets to be judged on the sliced images according to the shape data;
calculating the graspability of each target to be judged, and determining the targets to be judged whose graspability exceeds a first threshold as graspable targets;
identifying the position and posture of each graspable target.
Further, processing the original three-dimensional image to obtain the spatial three-dimensional point set of the multiple objects to be picked includes:
removing the image data outside the container, the image data of the container and the noise from the original three-dimensional image;
converting the original three-dimensional image, with the data outside the container, the container data and the noise removed, into a spatial three-dimensional point set.
Further, traversing the viewing angles above the container, generating a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slicing the three-dimensional image, and identifying targets to be judged on the sliced images according to the shape data includes:
setting a first viewing-angle range and a number of viewing-angle cycles;
in each viewing-angle cycle stage, generating a three-dimensional image for each viewing angle of that cycle stage, slicing the three-dimensional image, and identifying the targets to be judged on the sliced images;
comparing the targets to be judged obtained in all viewing-angle cycle stages, and removing the duplicates among them.
Further, in each viewing-angle cycle stage, generating a three-dimensional image for each viewing angle of that cycle stage, slicing the three-dimensional image, and identifying the targets to be judged on the sliced images includes:
setting a viewing-angle starting point and a first viewing-angle step, and traversing the viewing angles within the first viewing-angle range according to the set starting point and first viewing-angle step;
converting the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
slicing the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
identifying the skeletons of objects on the sliced images;
judging whether the skeleton of an object conforms to the shape data of the objects to be picked;
if the skeleton of an object conforms to the shape data of the objects to be picked, calculating the reliability of the object;
when the reliability of the object exceeds a second threshold, saving the object as a target to be judged.
Further, calculating the graspability of each target to be judged includes:
adding to the spatial three-dimensional point set the container data and the data of the objects to be picked that could become entangled with the target to be judged, to rebuild the spatial three-dimensional point set;
generating a three-dimensional image from the rebuilt spatial three-dimensional point set at the viewing angle corresponding to the target to be judged;
projecting the gripper of the robot onto the three-dimensional image;
calculating the graspability of the target to be judged according to the number of collision points of the projection and of collision points in the adjacent region, and the amount of free space.
Further, identifying the position and posture of each graspable target includes:
determining a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, the second viewing-angle range being smaller than the first viewing-angle range and the second viewing-angle step being smaller than the first viewing-angle step;
traversing the viewing angles within the second viewing-angle range according to the second viewing-angle step, and converting the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
slicing the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
identifying the skeleton of the graspable target on the sliced images;
judging whether the skeleton of the graspable target conforms to the shape data of the objects to be picked;
at the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, outputting the position and posture of the graspable target according to the corresponding sliced image.
Further, determining the grasping information of the graspable target according to its position and posture and sending the grasping information to the robot includes:
calculating the six-degree-of-freedom information of the graspable target according to its position and posture, and sending the six-degree-of-freedom information of the graspable target to the robot.
An embodiment of the present invention further provides an implementation device for robot picking, including:
a first receiving module, configured to receive an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
a second receiving module, configured to receive shape data of the objects to be picked;
a processing module, configured to calculate, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
a grasping information calculation module, configured to determine the grasping information of a graspable target according to its position and posture, and to send the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
Further, the processing module includes:
an image processing submodule, configured to process the original three-dimensional image to obtain a spatial three-dimensional point set of the multiple objects to be picked;
a slicing submodule, configured to traverse the viewing angles above the container, generate a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slice the three-dimensional image, and identify targets to be judged on the sliced images according to the shape data;
a calculation submodule, configured to calculate the graspability of each target to be judged, and to determine the targets to be judged whose graspability exceeds a first threshold as graspable targets;
an identification submodule, configured to identify the position and posture of each graspable target.
Further, the image processing submodule includes:
a removal unit, configured to remove the image data outside the container, the image data of the container and the noise from the original three-dimensional image;
a conversion unit, configured to convert the original three-dimensional image, with the data outside the container, the container data and the noise removed, into a spatial three-dimensional point set.
Further, the slicing submodule includes:
a setting unit, configured to set a first viewing-angle range and a number of viewing-angle cycles;
a slicing unit, configured to, in each viewing-angle cycle stage, generate a three-dimensional image for each viewing angle of that cycle stage, slice the three-dimensional image, and identify the targets to be judged on the sliced images;
a screening unit, configured to compare the targets to be judged obtained in all viewing-angle cycle stages, and to remove the duplicates among them.
Further, the slicing unit includes:
an initialization subunit, configured to set a viewing-angle starting point and a first viewing-angle step;
a traversal subunit, configured to traverse the viewing angles within the first viewing-angle range according to the set starting point and first viewing-angle step;
a conversion subunit, configured to convert the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing subunit, configured to slice the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
an identification subunit, configured to identify the skeletons of objects on the sliced images;
a judgment subunit, configured to judge whether the skeleton of an object conforms to the shape data of the objects to be picked;
a calculation subunit, configured to calculate the reliability of an object if its skeleton conforms to the shape data of the objects to be picked;
a saving subunit, configured to save an object as a target to be judged when its reliability exceeds a second threshold.
Further, the calculation submodule includes:
a rebuilding unit, configured to add to the spatial three-dimensional point set the container data and the data of the objects to be picked that could become entangled with the target to be judged, to rebuild the spatial three-dimensional point set;
a generation unit, configured to generate a three-dimensional image from the rebuilt spatial three-dimensional point set at the viewing angle corresponding to the target to be judged;
a projection unit, configured to project the gripper of the robot onto the three-dimensional image;
a calculation unit, configured to calculate the graspability of the target to be judged according to the number of collision points of the projection and of collision points in the adjacent region, and the amount of free space.
Further, the identification submodule includes:
a determination unit, configured to determine a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, the second viewing-angle range being smaller than the first viewing-angle range and the second viewing-angle step being smaller than the first viewing-angle step;
a traversal unit, configured to traverse the viewing angles within the second viewing-angle range according to the second viewing-angle step;
a conversion unit, configured to convert the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing unit, configured to slice the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
a recognition unit, configured to identify the skeleton of the graspable target on the sliced images;
a judgment unit, configured to judge whether the skeleton of the graspable target conforms to the shape data of the objects to be picked;
a calculation unit, configured to, at the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, output the position and posture of the graspable target according to the corresponding sliced image.
Further, the grasping information calculation module is specifically configured to calculate the six-degree-of-freedom information of the graspable target according to its position and posture, and to send the six-degree-of-freedom information of the graspable target to the robot.
An embodiment of the present invention further provides an electronic device for realizing robot picking, including:
a processor; and
a memory in which computer program instructions are stored,
wherein, when the computer program instructions are run by the processor, the processor is caused to execute the following steps:
receiving an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
receiving shape data of the objects to be picked;
calculating, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
determining grasping information of a graspable target according to its position and posture, and sending the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
The embodiments of the present invention have the following beneficial effects:
In the above solutions, the positions and postures of the objects to be picked are estimated in a random stacking scene from the acquired three-dimensional image and the shape data of the objects to be picked. The positions and postures of the objects to be picked can thus be estimated more accurately, enabling a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of the implementation method of robot picking according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of calculating the positions and postures of graspable targets among multiple objects to be picked according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of obtaining the spatial three-dimensional point set of multiple objects to be picked according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of identifying targets to be judged on the sliced images according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of identifying the targets to be judged on the sliced images according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of calculating the graspability of each target to be judged according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of identifying the position and posture of each graspable target according to an embodiment of the present invention;
Fig. 8 is a structural diagram of the implementation device of robot picking according to an embodiment of the present invention;
Fig. 9 is a structural diagram of the processing module according to an embodiment of the present invention;
Fig. 10 is a structural diagram of the image processing submodule according to an embodiment of the present invention;
Fig. 11 is a structural diagram of the slicing submodule according to an embodiment of the present invention;
Fig. 12 is a structural diagram of the slicing unit according to an embodiment of the present invention;
Fig. 13 is a structural diagram of the calculation submodule according to an embodiment of the present invention;
Fig. 14 is a structural diagram of the identification submodule according to an embodiment of the present invention;
Fig. 15 is a structural diagram of the electronic device for realizing robot picking according to an embodiment of the present invention;
Fig. 16 is a schematic flowchart of the implementation method of robot picking according to a specific embodiment of the present invention;
Fig. 17 is a schematic diagram of an application scenario according to an embodiment of the present invention;
Fig. 18 and Fig. 19 are schematic diagrams of viewing angles according to an embodiment of the present invention;
Fig. 20 and Fig. 21 are schematic diagrams of grippers at the robot end according to an embodiment of the present invention;
Fig. 22 is a schematic diagram of a grasp without entanglement according to an embodiment of the present invention;
Fig. 23 is a schematic diagram of a grasp with entanglement according to an embodiment of the present invention;
Fig. 24 and Fig. 25 are schematic diagrams of constructing partial data of objects to be picked for calculating graspability according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the technical problems to be solved, the technical solutions and the advantages of the embodiments of the present invention clearer, a detailed description is given below in conjunction with the accompanying drawings and specific embodiments.
The embodiments of the present invention provide an implementation method, a device and an electronic device for robot picking, which enable a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
Embodiment One
This embodiment provides an implementation method of robot picking. As shown in Fig. 1, the method includes:
Step 101: receiving an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
Step 102: receiving shape data of the objects to be picked;
Step 103: calculating, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
Step 104: determining grasping information of a graspable target according to its position and posture, and sending the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
In this embodiment, the positions and postures of the objects to be picked are estimated in a random stacking scene from the acquired three-dimensional image and the shape data of the objects to be picked. The positions and postures of the objects to be picked can thus be estimated more accurately, enabling a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
Here, the position of a graspable target refers to its specific location in the container, which can be expressed as the three-dimensional coordinates of the graspable target in the container. The posture of a graspable target includes its orientation; for example, if the graspable target is a ring, its posture can be expressed as the angles between the plane of the ring and the x, y and z axes of the spatial coordinate system.
After knowing the grasping information of a graspable target, the robot can complete the pickup of that target. Specifically, the grasping information can be the six-degree-of-freedom information of the graspable target obtained from its position and posture. An object has six degrees of freedom in space, namely the translational degrees of freedom along the three rectangular coordinate axes x, y and z and the rotational degrees of freedom around these three axes. According to the six-degree-of-freedom information of an object, the robot can grasp it accurately.
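As a hypothetical illustration only (the patent does not prescribe code or an angle convention), the six-degree-of-freedom information for the ring example could be assembled from the ring's center and plane normal as in the following Python sketch; the (roll, pitch, yaw) decomposition and the choice to fix the free roll angle of the rotationally symmetric ring at zero are assumptions.

```python
import numpy as np

def ring_pose_to_6dof(center, normal):
    """Assemble a hypothetical 6-DOF vector (x, y, z, roll, pitch, yaw)
    from a ring's center point and its unit plane normal."""
    normal = normal / np.linalg.norm(normal)
    # Orient the approach axis along the ring normal; a ring is
    # rotationally symmetric, so the free roll angle is fixed at 0.
    yaw = np.arctan2(normal[1], normal[0])
    pitch = np.arccos(np.clip(normal[2], -1.0, 1.0))
    return np.array([center[0], center[1], center[2], 0.0, pitch, yaw])

# Example: a ring lying almost flat near the bottom of the container.
pose = ring_pose_to_6dof(np.array([0.12, -0.05, 0.30]),
                         np.array([0.05, 0.02, 0.99]))
```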
As an example, as shown in Fig. 2, step 103 includes:
Step 1031: processing the original three-dimensional image to obtain the spatial three-dimensional point set of the multiple objects to be picked;
Step 1032: traversing the viewing angles above the container, generating a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slicing the three-dimensional image, and identifying targets to be judged on the sliced images according to the shape data;
Step 1033: calculating the graspability of each target to be judged, and determining the targets to be judged whose graspability exceeds a first threshold as graspable targets;
Step 1034: identifying the position and posture of each graspable target.
As an example, as shown in Fig. 3, step 1031 includes:
Step 10311: removing the image data outside the container, the image data of the container and the noise from the original three-dimensional image;
Step 10312: converting the original three-dimensional image, with the data outside the container, the container data and the noise removed, into a spatial three-dimensional point set.
As an example, as shown in Fig. 4, step 1032 includes:
Step 10321: setting a first viewing-angle range and a number of viewing-angle cycles;
Step 10322: in each viewing-angle cycle stage, generating a three-dimensional image for each viewing angle of that cycle stage, slicing the three-dimensional image, and identifying the targets to be judged on the sliced images;
Step 10323: comparing the targets to be judged obtained in all viewing-angle cycle stages, and removing the duplicates among them.
As an example, as shown in Fig. 5, step 10322 includes:
Step 103221: setting a viewing-angle starting point and a first viewing-angle step, and traversing the viewing angles within the first viewing-angle range according to the set starting point and first viewing-angle step;
Step 103222: converting the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
Step 103223: slicing the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
Step 103224: identifying the skeletons of objects on the sliced images;
Step 103225: judging whether the skeleton of an object conforms to the shape data of the objects to be picked;
Step 103226: if the skeleton of an object conforms to the shape data of the objects to be picked, calculating the reliability of the object;
Step 103227: when the reliability of the object exceeds a second threshold, saving the object as a target to be judged.
As an example, as shown in Fig. 6, step 1033 includes:
Step 10331: adding to the spatial three-dimensional point set the container data and the data of the objects to be picked that could become entangled with the target to be judged, to rebuild the spatial three-dimensional point set;
Step 10332: generating a three-dimensional image from the rebuilt spatial three-dimensional point set at the viewing angle corresponding to the target to be judged;
Step 10333: projecting the gripper of the robot onto the three-dimensional image;
Step 10334: calculating the graspability of the target to be judged according to the number of collision points of the projection and of collision points in the adjacent region, and the amount of free space.
As an example, as shown in Fig. 7, step 1034 includes:
Step 10341: determining a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, the second viewing-angle range being smaller than the first viewing-angle range and the second viewing-angle step being smaller than the first viewing-angle step;
Step 10342: traversing the viewing angles within the second viewing-angle range according to the second viewing-angle step, and converting the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
Step 10343: slicing the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
Step 10344: identifying the skeleton of the graspable target on the sliced images;
Step 10345: judging whether the skeleton of the graspable target conforms to the shape data of the objects to be picked;
Step 10346: at the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, outputting the position and posture of the graspable target according to the corresponding sliced image.
Further, determining the grasping information of the graspable target according to its position and posture and sending the grasping information to the robot includes:
calculating the six-degree-of-freedom information of the graspable target according to its position and posture, and sending the six-degree-of-freedom information of the graspable target to the robot.
Embodiment Two
This embodiment provides an implementation device 20 of robot picking. As shown in Fig. 8, this embodiment includes:
a first receiving module 21, configured to receive an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
a second receiving module 22, configured to receive shape data of the objects to be picked;
a processing module 23, configured to calculate, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
a grasping information calculation module 24, configured to determine the grasping information of a graspable target according to its position and posture, and to send the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
In this embodiment, the positions and postures of the objects to be picked are estimated in a random stacking scene from the acquired three-dimensional image and the shape data of the objects to be picked. The positions and postures of the objects to be picked can thus be estimated more accurately, enabling a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
As an example, as shown in Fig. 9, the processing module 23 includes:
an image processing submodule 231, configured to process the original three-dimensional image to obtain a spatial three-dimensional point set of the multiple objects to be picked;
a slicing submodule 232, configured to traverse the viewing angles above the container, generate a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slice the three-dimensional image, and identify targets to be judged on the sliced images according to the shape data;
a calculation submodule 233, configured to calculate the graspability of each target to be judged, and to determine the targets to be judged whose graspability exceeds a first threshold as graspable targets;
an identification submodule 234, configured to identify the position and posture of each graspable target.
As an example, as shown in Fig. 10, the image processing submodule 231 includes:
a removal unit 2311, configured to remove the image data outside the container, the image data of the container and the noise from the original three-dimensional image;
a conversion unit 2312, configured to convert the original three-dimensional image, with the data outside the container, the container data and the noise removed, into a spatial three-dimensional point set.
As an example, as shown in Fig. 11, the slicing submodule 232 includes:
a setting unit 2321, configured to set a first viewing-angle range and a number of viewing-angle cycles;
a slicing unit 2322, configured to, in each viewing-angle cycle stage, generate a three-dimensional image for each viewing angle of that cycle stage, slice the three-dimensional image, and identify the targets to be judged on the sliced images;
a screening unit 2323, configured to compare the targets to be judged obtained in all viewing-angle cycle stages, and to remove the duplicates among them.
As an example, as shown in Fig. 12, the slicing unit 2322 includes:
an initialization subunit 23221, configured to set a viewing-angle starting point and a first viewing-angle step;
a traversal subunit 23222, configured to traverse the viewing angles within the first viewing-angle range according to the set starting point and first viewing-angle step;
a conversion subunit 23223, configured to convert the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing subunit 23224, configured to slice the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
an identification subunit 23225, configured to identify the skeletons of objects on the sliced images;
a judgment subunit 23226, configured to judge whether the skeleton of an object conforms to the shape data of the objects to be picked;
a calculation subunit 23227, configured to calculate the reliability of an object if its skeleton conforms to the shape data of the objects to be picked;
a saving subunit 23228, configured to save an object as a target to be judged when its reliability exceeds a second threshold.
As an example, as shown in Fig. 13, the calculation submodule 233 includes:
a rebuilding unit 2331, configured to add to the spatial three-dimensional point set the container data and the data of the objects to be picked that could become entangled with the target to be judged, to rebuild the spatial three-dimensional point set;
a generation unit 2332, configured to generate a three-dimensional image from the rebuilt spatial three-dimensional point set at the viewing angle corresponding to the target to be judged;
a projection unit 2333, configured to project the gripper of the robot onto the three-dimensional image;
a calculation unit 2334, configured to calculate the graspability of the target to be judged according to the number of collision points of the projection and of collision points in the adjacent region, and the amount of free space.
As an example, as shown in Fig. 14, the identification submodule 234 includes:
a determination unit 2341, configured to determine a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, the second viewing-angle range being smaller than the first viewing-angle range and the second viewing-angle step being smaller than the first viewing-angle step;
a traversal unit 2342, configured to traverse the viewing angles within the second viewing-angle range according to the second viewing-angle step;
a conversion unit 2343, configured to convert the spatial three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing unit 2344, configured to slice the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked;
a recognition unit 2345, configured to identify the skeleton of the graspable target on the sliced images;
a judgment unit 2346, configured to judge whether the skeleton of the graspable target conforms to the shape data of the objects to be picked;
a calculation unit 2347, configured to, at the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, output the position and posture of the graspable target according to the corresponding sliced image.
Further, the grasping information calculation module 24 is specifically configured to calculate the six-degree-of-freedom information of the graspable target according to its position and posture, and to send the six-degree-of-freedom information of the graspable target to the robot.
Embodiment Three
This embodiment provides an electronic device 30 for realizing robot picking. As shown in Fig. 15, this embodiment includes:
a processor 32; and
a memory 34 in which computer program instructions are stored,
wherein, when the computer program instructions are run by the processor, the processor 32 is caused to execute the following steps:
receiving an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked;
receiving shape data of the objects to be picked;
calculating, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked;
determining grasping information of a graspable target according to its position and posture, and sending the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
Further, as shown in Fig. 15, the electronic device for realizing robot picking further includes a network interface 31, an input device 33, a hard disk 35 and a display device 36.
The above interfaces and devices can be interconnected by a bus architecture. The bus architecture may include any number of interconnected buses and bridges, which electrically connect together one or more central processing units (CPUs), represented by the processor 32, and one or more memories, represented by the memory 34. The bus architecture may also electrically connect together various other circuits, such as peripheral devices, voltage regulators and power management circuits. It will be appreciated that the bus architecture is used to realize the connection and communication between these components. In addition to a data bus, the bus architecture includes a power bus, a control bus and a status signal bus, all of which are well known in the art and are therefore not described in further detail herein.
The network interface 31 can be connected to a network (such as the Internet or a local area network) to obtain relevant data from the network, for example the original three-dimensional image and the shape data of the objects to be picked, which can be stored on the hard disk 35.
The input device 33 can receive various instructions input by an operator and send them to the processor 32 for execution. The input device 33 may include a keyboard or a pointing device (for example, a mouse, a trackball, a touch pad or a touch screen).
The display device 36 can display the results obtained by the processor 32 executing the instructions.
The memory 34 is used to store the programs and data necessary for the operating system to run, as well as data such as intermediate results produced during the calculations of the processor 32.
It will be appreciated that the memory 34 in the embodiments of the present invention can be a volatile memory or a nonvolatile memory, or may include both. The nonvolatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory. The volatile memory can be a random access memory (RAM), used as an external cache. The memory 34 of the device and method described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 34 stores the following elements, executable modules or data structures, or subsets or supersets thereof: an operating system 341 and application programs 342.
The operating system 341 includes various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 342 include various application programs, such as a browser, for realizing various application services. A program implementing the method of an embodiment of the present invention may be included in the application programs 342.
When the processor 32 calls and executes the application programs and data stored in the memory 34, specifically the programs or instructions stored in the application programs 342, it can receive an original three-dimensional image, the original three-dimensional image being obtained by a camera photographing a container in which multiple objects to be picked are stacked; receive shape data of the objects to be picked; calculate, according to the original three-dimensional image and the shape data, the positions and postures of graspable targets among the multiple objects to be picked; and determine grasping information of a graspable target according to its position and posture and send the grasping information to a robot, so that the robot picks up the graspable target according to the grasping information.
The method disclosed in the above embodiments of the present invention can be applied to, or implemented by, the processor 32. The processor 32 may be an integrated circuit chip with signal processing capability. In the course of implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 32 or by instructions in the form of software. The processor 32 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 34, and the processor 32 reads the information in the memory 34 and completes the steps of the above method in combination with its hardware.
It is understood that the embodiments described herein can be implemented by hardware, software, firmware, middleware, microcode or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or combinations thereof.
For a software implementation, the techniques described herein can be implemented by modules (such as procedures or functions) that perform the functions described herein. The software code can be stored in a memory and executed by a processor. The memory can be implemented in the processor or outside the processor.
Specifically, the processor 32 processes the original three-dimensional image to obtain the spatial three-dimensional point set of the multiple objects to be picked; traverses the viewing angles above the container, generates a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slices the three-dimensional image, and identifies targets to be judged on the sliced images according to the shape data; calculates the graspability of each target to be judged, and determines the targets to be judged whose graspability exceeds a first threshold as graspable targets; and identifies the position and posture of each graspable target.
Specifically, the processor 32 removes the image data outside the container, the image data of the container and the noise from the original three-dimensional image, and converts the original three-dimensional image, with the data outside the container, the container data and the noise removed, into a spatial three-dimensional point set.
Specifically, the processor 32 sets a first viewing-angle range and a number of viewing-angle cycles; in each viewing-angle cycle stage, generates a three-dimensional image for each viewing angle of that cycle stage, slices the three-dimensional image, and identifies the targets to be judged on the sliced images; and compares the targets to be judged obtained in all viewing-angle cycle stages and removes the duplicates among them.
Specifically, the processor 32 sets a viewing-angle starting point and a first viewing-angle step, and traverses the viewing angles within the first viewing-angle range according to the set starting point and first viewing-angle step; converts the spatial three-dimensional point set into a three-dimensional image at each viewing angle; slices the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked; identifies the skeletons of objects on the sliced images; judges whether the skeleton of an object conforms to the shape data of the objects to be picked; if so, calculates the reliability of the object; and when the reliability of the object exceeds a second threshold, saves the object as a target to be judged.
Specifically, the processor 32 adds to the spatial three-dimensional point set the container data and the data of the objects to be picked that could become entangled with the target to be judged, to rebuild the spatial three-dimensional point set; generates a three-dimensional image from the rebuilt spatial three-dimensional point set at the viewing angle corresponding to the target to be judged; projects the gripper of the robot onto the three-dimensional image; and calculates the graspability of the target to be judged according to the number of collision points of the projection and of collision points in the adjacent region, and the amount of free space.
Specifically, the processor 32 determines a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, the second viewing-angle range being smaller than the first viewing-angle range and the second viewing-angle step being smaller than the first viewing-angle step; traverses the viewing angles within the second viewing-angle range according to the second viewing-angle step, and converts the spatial three-dimensional point set into a three-dimensional image at each viewing angle; slices the three-dimensional image at a preset spacing along the direction of distance from the viewpoint to the objects to be picked; identifies the skeleton of the graspable target on the sliced images; judges whether the skeleton of the graspable target conforms to the shape data of the objects to be picked; and at the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, outputs the position and posture of the graspable target according to the corresponding sliced image.
Specifically, the processor 32 calculates the six-degree-of-freedom information of the graspable target according to its position and posture, and sends the six-degree-of-freedom information of the graspable target to the robot.
In the above solutions, the positions and postures of the objects to be picked are estimated in a random stacking scene from the acquired three-dimensional image and the shape data of the objects to be picked. The positions and postures of the objects to be picked can thus be estimated more accurately, enabling a robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
Embodiment Four
In manufacturing and other fields there is a broad demand for robots that sort randomly stacked objects, and the prevalence of machine vision provides a basis for this. The implementation method of robot picking of this embodiment only needs to know a local characteristic shape of the object, for a ring simply a circle, to find graspable targets among randomly stacked objects together with their positions and postures, which are sent to the robot so that it can pick up the graspable targets according to the grasping information.
As shown in Fig. 17, multiple objects 5 of the same size are stacked in a container 4; in this embodiment, the objects 5 are rings. A camera 3 can capture a three-dimensional image of the container 4. From the three-dimensional image captured by the camera 3, the technical solution of this embodiment can identify graspable targets in the container 4, so that a robot 1 can grasp the graspable targets in the container 4 with a gripper 2.
As shown in Fig. 16, the implementation method of robot picking of this embodiment specifically includes the following steps:
Step 401: receiving the original three-dimensional image captured by the camera;
The original three-dimensional image is obtained by the camera photographing the container in which multiple objects to be picked are stacked, and it contains distance information.
Step 402: receiving the shape data of the objects to be picked;
The shape data of the objects to be picked includes their size and shape; according to the shape data, the objects to be picked can be identified in three-dimensional or two-dimensional images.
Step 403: processing the original three-dimensional image to obtain the spatial three-dimensional point set of the multiple objects to be picked;
This step removes the data unrelated to the objects to be picked from the original three-dimensional image, and converts the points of the three-dimensional image into a sequence of three-dimensional points.
First, the three-dimensional data of the working surface supporting the container can be identified and removed, which can be achieved with a plane-modelling method. Then the three-dimensional data of the container itself is identified and removed; the container's three-dimensional data can be identified from a known three-dimensional model of the container, or obtained through a container registration step. The noise in the three-dimensional image is also removed. Finally, the remaining points of the three-dimensional image are converted into a spatial three-dimensional point set, i.e. a clean point set related only to the objects to be picked.
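The patent names a plane-modelling approach without prescribing an algorithm; one plausible realization of the working-surface removal is a plain RANSAC plane fit, sketched below in numpy-only Python, with all threshold values assumed for illustration.

```python
import numpy as np

def remove_dominant_plane(points, dist_thresh=0.005, iters=200, seed=0):
    """RANSAC plane fit over an (N, 3) point array: returns the points that
    do NOT lie on the dominant plane (e.g. the working surface)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:   # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```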
Step 404: traversing the viewing angles above the container, generating a three-dimensional image of the spatial three-dimensional point set for each viewing angle, slicing the three-dimensional image, and identifying targets to be judged on the sliced images according to the shape data;
As shown in Fig. 18, suppose a hemispherical grid covers the space above the container, with its sphere center at the intersection of the camera's center line and the working surface on which the container stands. A viewing angle is the direction of the line from any point on the sphere to the sphere center. A viewing angle can therefore be determined by the latitude α and the longitude θ of the point on the sphere, with ranges 0 ≤ α ≤ 90 and -180 < θ ≤ 180, respectively.
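A minimal sketch of this latitude/longitude parameterization, assuming a z-up coordinate frame with the working surface at z = 0:

```python
import numpy as np

def view_direction(alpha_deg, theta_deg):
    """Unit direction of the line from the hemisphere point at latitude
    alpha and longitude theta (degrees) toward the sphere center."""
    a, t = np.radians(alpha_deg), np.radians(theta_deg)
    point = np.array([np.cos(a) * np.cos(t),   # point on the unit hemisphere
                      np.cos(a) * np.sin(t),
                      np.sin(a)])
    return -point                              # looking toward the center
```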
As shown in Fig. 19, when identifying targets to be judged, a first viewing-angle range and a number of viewing-angle cycles must first be set. Multiple viewing angles are then searched in each viewing-angle cycle stage: a three-dimensional image of the spatial three-dimensional point set is generated for each viewing angle, the three-dimensional image is sliced, and targets to be judged are identified on the sliced images according to the shape data of the objects to be picked. For example, this is done in turn for the viewing angles (0, 0), (10, 20), (20, 30), (30, 60) and (40, 110): for each viewing angle a three-dimensional image is generated, the image is sliced, and targets to be judged are identified on the sliced images according to the shape data of the objects to be picked.
The purpose of running multiple viewing-angle cycles is to obtain enough targets to be judged: if not enough targets to be judged have been obtained when one cycle ends, the next cycle is started. Alternatively, a time limit can be set, multiple viewing-angle cycles are run within that time, and when the set time is reached the viewing-angle cycles end.
Specifically, in each viewing-angle cycle stage, a viewing-angle starting point and a first viewing-angle step are set, and the viewing angles within the first viewing-angle range are traversed according to the set starting point and first viewing-angle step. The first viewing-angle step includes a longitude sampling step and a latitude sampling step: the longitude sampling step equals the longitude range of the first viewing-angle range divided by the number of sampled viewing angles, and the latitude sampling step equals the latitude range of the first viewing-angle range divided by the number of sampled viewing angles.
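A small sketch of this traversal rule, under the assumption that latitude and longitude are sampled with the same number of viewing angles:

```python
def viewing_angles(alpha0, theta0, alpha_range, theta_range, n_samples):
    """Yield the (latitude, longitude) pairs of one viewing-angle cycle
    stage; each sampling step equals range / number of sampled angles."""
    d_alpha = alpha_range / n_samples
    d_theta = theta_range / n_samples
    for i in range(n_samples):
        for j in range(n_samples):
            yield alpha0 + i * d_alpha, theta0 + j * d_theta

# e.g. the whole hemisphere sampled 5 x 5 from the starting point (0, -180):
views = list(viewing_angles(0, -180, 90, 360, 5))
```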
At each viewing angle, the spatial three-dimensional point set is converted into the three-dimensional image under that viewing angle, and the three-dimensional image is sliced at a preset spacing along the direction of distance from the viewpoint to the objects to be picked, where the preset spacing can be set as needed. The characteristic shape is then searched for and identified in the sliced images: if one or more instances of the shape are present in a sliced image, one or more objects exist under that viewing angle. The skeletons of objects are identified on the sliced images, and it is judged whether the skeleton of an object conforms to the shape data of the objects to be picked. If it does, the reliability of the object still needs to be calculated, to verify whether the identified shape points really belong to one object: in some cases, partial shape points of several objects may together form the skeleton of one apparent object, which then obviously is not a really existing object. The verification considers the tidiness and completeness of the shape point set, from which the reliability of the object's existence is calculated. If the reliability of the object exceeds a preset second threshold, the identified object is saved in an intermediate result list.
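For a ring, the characteristic shape in a slice is a circle. The sketch below illustrates the slice-and-detect step, assuming the three-dimensional image under a viewing angle is available as a depth map and using OpenCV's Hough circle transform as one possible shape detector (the patent does not prescribe a detector); all parameter values are illustrative.

```python
import numpy as np
import cv2

def find_rings_in_slices(depth, spacing, ring_radius_px):
    """Slice a depth image at a preset spacing along the viewing direction
    and search each slice for the ring's characteristic circle."""
    candidates = []
    for near in np.arange(np.nanmin(depth), np.nanmax(depth), spacing):
        # Binary slice: pixels whose distance falls inside this layer.
        layer = ((depth >= near) & (depth < near + spacing)).astype(np.uint8) * 255
        circles = cv2.HoughCircles(
            layer, cv2.HOUGH_GRADIENT, dp=1, minDist=ring_radius_px,
            param1=100, param2=20,
            minRadius=int(0.8 * ring_radius_px),
            maxRadius=int(1.2 * ring_radius_px))
        if circles is not None:
            for x, y, r in circles[0]:
                candidates.append((near, x, y, r))
    return candidates
```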
After multiple viewing-angle cycle stages, multiple intermediate result lists are obtained. The results in these lists are compared, the duplicates among them are removed, and the objects with the highest reliability are chosen as the targets to be judged.
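The patent does not specify how duplicates are detected; one plausible criterion, sketched below, treats candidates with nearby centers as the same object and keeps only the most reliable one.

```python
import numpy as np

def merge_candidates(candidates, min_center_dist):
    """Merge the intermediate result lists: candidates whose centers lie
    closer than min_center_dist count as the same object, and only the
    most reliable one is kept.  Each candidate is a pair
    (center: (3,) array, reliability: float)."""
    kept = []
    for center, rel in sorted(candidates, key=lambda c: -c[1]):
        if all(np.linalg.norm(center - kc) >= min_center_dist for kc, _ in kept):
            kept.append((center, rel))
    return kept
```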
By observing the three-dimensional image from multiple viewing angles, slicing it into layers, and identifying the characteristic shape of the objects on the sliced images, this embodiment can estimate the posture of the objects more accurately.
Step 405: calculating the graspability of each target to be judged, and determining the targets to be judged whose graspability exceeds a first threshold as graspable targets;
The gripper at the robot end may be of the two types shown in Fig. 20 and Fig. 21. The two-finger gripper shown in Fig. 20 picks up an object, such as a ring, by opening and closing its two fingers. The ring-shaped gripper shown in Fig. 21 contracts, then expands to support a ring-shaped object from the inside and pick it up; compared with the two-finger gripper, the ring-shaped gripper applies a soft, uniform force to the object to be picked, making it more suitable for delicate objects that require gentle handling.
When calculating the graspability, the three-dimensional data of the container has to be restored; otherwise the robot may be blocked by the container, or collide with it, when grasping a target to be judged, causing the grasp to fail. Specifically, the three-dimensional data of the container can be rebuilt from the container model, and these data can be used for calculating the graspability of the targets to be judged.
In addition, to prevent entanglement, partial three-dimensional data of the objects to be picked can also be constructed. As shown in Fig. 22 and Fig. 23, object A lies below object B; the grasp of Fig. 22, at the position in the dashed box, will not cause entanglement, whereas the grasp of Fig. 23 will. To avoid entanglement, as shown in Fig. 24 and Fig. 25, the hollow areas of all detected objects above the target to be judged are filled in as solid to construct the three-dimensional data of the objects, so that a grasp at the grasp position calculated in this way will not cause entanglement.
In summary, the container data and the data of the constructed objects are added to the spatial three-dimensional point set to rebuild it. At the viewing angle corresponding to the target to be judged, a three-dimensional image is generated from the rebuilt spatial three-dimensional point set, and the gripper of the robot is projected onto the generated three-dimensional image. The graspability of the target to be judged is then calculated from the collision points of the projection, the number of collision points in the adjacent region and the amount of free space.
Specifically, the graspability G of a target to be judged, with G between 0 and 1, can be calculated with the following formula:
G = 1 - g1 * collisionPoints + g2 * marginPoints;
where g1 and g2 are set coefficients, collisionPoints is the number of collision points of the projection and of collision points in the adjacent region, and marginPoints is the number of points of the blank area near the projection.
When the calculated graspability exceeds the first threshold, the target to be judged is determined as a graspable target.
Step 406: identifying the position and posture of each graspable target.
In the above steps, the precision of the traversed viewing angles is coarse and serves only to roughly determine the graspable targets. After a graspable target has been determined, and before it is picked up, a more accurate position and posture of the graspable target must be obtained. Specifically, a second viewing-angle range and a second viewing-angle step corresponding to the graspable target are determined, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step. The viewing angles within the second viewing-angle range are traversed according to the second viewing-angle step, so that the viewing angle accurately corresponding to the graspable target can be found, and the position and posture of the graspable target under that viewing angle obtained, as sketched below.
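A sketch of this coarse-to-fine refinement, reusing the viewing_angles generator from the earlier sketch; the shrink factor and the shape-conformity scoring callback score_at are assumptions introduced for illustration.

```python
def refine_view(coarse_view, coarse_step, score_at, shrink=0.2, n_samples=5):
    """Coarse-to-fine search: re-sample a smaller angle window around the
    coarse hit with a smaller step, keeping the best-scoring viewing angle.
    score_at(alpha, theta) is a hypothetical shape-conformity score."""
    a0, t0 = coarse_view
    half = coarse_step * shrink * n_samples / 2  # second range < first range
    best = (coarse_view, score_at(a0, t0))
    for a, t in viewing_angles(a0 - half, t0 - half, 2 * half, 2 * half, n_samples):
        s = score_at(a, t)
        if s > best[1]:
            best = ((a, t), s)
    return best
```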
Specifically, the spatial three-dimensional point set is converted into a three-dimensional image at each viewing angle, the three-dimensional image is sliced at the preset spacing along the direction of distance from the viewpoint to the objects to be picked, the skeleton of the graspable target is identified on the sliced images, and it is judged whether the skeleton of the graspable target conforms to the shape data of the objects to be picked. At the viewing angle where the skeleton of the graspable target best conforms to the shape data of the objects to be picked, the position and posture of the graspable target are obtained from the corresponding sliced image.
The six-degree-of-freedom information of the graspable target is then calculated from its position and posture and sent to the robot; according to the received six-degree-of-freedom information, the robot can accurately pick up the graspable target.
Based on three-dimensional images transformed over multiple viewing angles to identify the position and posture of an object, this embodiment can accurately output the six-degree-of-freedom information of the object. This not only helps the robot grasp more suitable objects, but also solves the entanglement problem, enabling the robot to automatically pick hollow objects, shell-shaped objects and objects that may become entangled.
The above are preferred embodiments of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.