CN110142767A - Clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system - Google Patents

Clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system

Info

Publication number
CN110142767A
CN110142767A (application publication) · CN110142767B (granted publication) · Application CN201910532299.8A
Authority
CN
China
Prior art keywords
clamping jaw
vision
control
vision camera
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910532299.8A
Other languages
Chinese (zh)
Other versions
CN110142767B (en)
Inventor
韩凤磷
刘福东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sreier (suzhou) Intelligent Technology Co Ltd
Original Assignee
Sreier (suzhou) Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sreier (suzhou) Intelligent Technology Co Ltd
Priority to CN201910532299.8A
Publication of CN110142767A
Application granted
Publication of CN110142767B
Legal status: Active
Anticipated expiration

Abstract

The embodiments of the invention provide a clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system. A vision camera is integrally arranged in the clamping portion of the clamping jaw, and multiple detection sensors are arranged along the stroke of the clamping jaw. By combining the control information of the clamping jaw and the camera during the control process, the clamping jaw can be accurately detected and controlled at any position along its full stroke. In addition, because the vision camera is integrated into the clamping jaw, the camera can be controlled automatically or manually in use to adjust the correlation between the vision camera and the clamping jaw, which greatly improves debugging efficiency and adjustment accuracy while reducing labor cost and time cost.

Description

Clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system
Technical field
The present invention relates to the field of clamping jaw control, and in particular to a clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system.
Background technique
Existing products are typically only able to detect the positions of the start and end points of the clamping jaw's stroke; they cannot detect the position of any other arbitrary point along the full stroke, and therefore cannot control the clamping jaw at an arbitrary position. In addition, a traditional clamping jaw requires the correlation between the vision system and the clamping jaw, including hardware position and software, to be debugged manually when in use, which causes many inconveniences in terms of debugging efficiency and precision.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system, so that the clamping jaw can be accurately detected and controlled at any position along its full stroke. Furthermore, because the vision camera is integrated into the clamping jaw, the camera can be controlled automatically or manually in use to adjust the correlation between the vision camera and the clamping jaw, which greatly improves debugging efficiency and adjustment accuracy while reducing labor cost and time cost.
According to one aspect of the embodiments of the present invention, a clamping jaw control method with an integrated vision system is provided, applied to clamping jaw control equipment. The clamping jaw control equipment is used to control a clamping jaw within a stroke; a vision camera is integrally arranged in the clamping portion of the clamping jaw; multiple detection sensors are arranged along the stroke of the clamping jaw; and the to-be-selected positions of the vision camera under each preset operating mode are stored in the clamping jaw control equipment. The method comprises:
obtaining displacement data of the clamping jaw as it moves from a starting point within a target stroke, vision acquisition data acquired by the vision camera on the clamping jaw, and the follow-up position of the vision camera under the target operating mode, wherein the vision acquisition data comprise video stream data of the clamping jaw while it moves from the starting point within the target stroke;
determining a corresponding target detection sensor and a to-be-adjusted detection orientation of the target detection sensor according to the displacement data and the stored correspondence between the displacement data and the detection sensor orientation records of the previous control period, and controlling the target detection sensor to adjust to the to-be-adjusted detection orientation; when the clamping jaw is located within the effective detection range of the to-be-adjusted detection orientation of the target detection sensor, obtaining the position information of the clamping jaw through the target detection sensor, and then determining first control information of the clamping jaw in the target stroke according to the obtained position information of the clamping jaw;
determining second control information of the vision camera according to the vision acquisition data acquired by the vision camera and the follow-up position;
controlling the clamping jaw according to the first control information and the second control information.
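The four method steps above can be sketched in plain Python. Everything here — the dictionary-based sensor correspondence, the function names, and the shape of the control information — is an illustrative assumption, not the patent's actual implementation:

```python
# Hypothetical sketch of the method's control flow. All names are assumptions.

def first_control_info(displacement, sensor_map, positions):
    """Map displacement data to a target detection sensor via the stored
    correspondence, then derive first control info from the jaw position
    that sensor reports."""
    target_sensor = sensor_map[min(sensor_map, key=lambda d: abs(d - displacement))]
    return {"sensor": target_sensor, "position": positions[target_sensor]}

def second_control_info(vision_frames, follow_up_location):
    """Derive second control info from the camera's video stream and its
    to-be-selected position under the target operating mode."""
    last_frame_mean = sum(vision_frames[-1]) / len(vision_frames[-1])
    return {"camera_target": follow_up_location, "feature": last_frame_mean}

def control_jaw(first_info, second_info):
    """Combine both pieces of control information into one jaw command."""
    return (first_info["position"], second_info["camera_target"])
```

The point of the sketch is only the data flow: displacement data select a sensor, the sensor's reading becomes the first control information, the camera stream becomes the second, and the two are combined at the end.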
In one possible example, controlling the target detection sensor to adjust to the to-be-adjusted detection orientation further comprises:
executing a counting instruction, the counting instruction containing multiple timer values for the target detection sensor during the orientation adjustment;
counting repeatedly according to the counting instruction, and triggering an interrupt signal each time a count reaches the corresponding timer value;
controlling the target detection sensor to adjust by a corresponding angle and/or displacement according to each triggered interrupt signal.
The step of executing the counting instruction comprises:
reading the control curve for the target detection sensor in the previous control period, and calculating, according to the control curve, the timer values of the target detection sensor during the orientation adjustment;
executing the counting instruction based on the timer values.
The step of calculating, according to the control curve, the timer values of the target detection sensor during the orientation adjustment comprises:
obtaining, according to the control curve, each control period of the target detection sensor and the angular unit and/or displacement unit corresponding to each control period;
calculating the multiple timer values of the target detection sensor during the orientation adjustment according to each control period and its corresponding angular unit and/or displacement unit.
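A minimal sketch of the timer-value computation just described, assuming the control curve is a list of (control period, angular step) pairs from the previous control period and that the timer runs at a fixed tick rate; the names, units, and tick rate are all assumptions:

```python
# Illustrative only: derive per-step timer values from a control curve.

def timer_values(control_curve, unit_angle_deg=1.0, timer_hz=1000):
    """control_curve: list of (period_s, angle_step_deg) pairs.
    For each control period, the timer value is the number of timer ticks
    after which one angular unit of adjustment should be triggered."""
    values = []
    for period_s, angle_step_deg in control_curve:
        steps = angle_step_deg / unit_angle_deg        # adjustments this period
        ticks_per_step = (period_s * timer_hz) / steps # ticks between interrupts
        values.append(int(ticks_per_step))
    return values
```

A period that must cover more angular units thus gets a smaller timer value, i.e. more frequent interrupts.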
In one possible example, the step of determining the second control information of the vision camera according to the vision acquisition data acquired by the vision camera and the follow-up position comprises:
obtaining, through a forward estimation algorithm, candidate orientation angles of the vision camera and the displacement point and moment point corresponding to each candidate orientation angle, according to the vision acquisition data acquired by the vision camera and the follow-up position;
analyzing the motion information of the vision camera from the vision acquisition data acquired by the vision camera using a pre-trained type-2 fuzzy neural classifier, and extracting vision camera historical trajectory information based on the motion information of the vision camera;
matching the vision camera historical trajectory information against a predetermined motion trajectory; when the vision camera historical trajectory information matches the predetermined motion trajectory, converting the vision camera trajectory information into the second control information of the corresponding vision camera according to the pre-stored correspondence between trajectory information and pre-stored control information and the correspondence between the pre-stored control information and predictive control information.
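The trajectory-matching and chained correspondence lookup above can be sketched as follows; the matching tolerance, the tuple keys, and the dictionary shapes are assumptions for illustration:

```python
# Illustrative sketch: match the camera's historical trajectory against a
# predetermined one, then chain two stored correspondences
# (trajectory -> pre-stored control info -> predictive control info).

def trajectories_match(history, predetermined, tol=0.5):
    """Element-wise comparison within an assumed tolerance."""
    if len(history) != len(predetermined):
        return False
    return all(abs(h - p) <= tol for h, p in zip(history, predetermined))

def second_control_from_trajectory(history, predetermined,
                                   traj_to_ctrl, ctrl_to_predictive):
    """Return the second control information, or None if no match."""
    if not trajectories_match(history, predetermined):
        return None
    ctrl = traj_to_ctrl[tuple(predetermined)]   # trajectory -> control info
    return ctrl_to_predictive[ctrl]             # control info -> predictive info
```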
In one possible example, the step of analyzing the motion information of the vision camera from the vision acquisition data acquired by the vision camera using the pre-trained type-2 fuzzy neural classifier comprises:
converting the vision acquisition data acquired by the vision camera into grayscale vision acquisition data, binarizing the grayscale vision acquisition data, and then performing HOG feature extraction to obtain HOG feature extraction data, the type-2 fuzzy neural classifier having been trained on vision acquisition HOG feature samples;
performing, with the pre-constructed type-2 fuzzy neural classifier, a binary decision between visual target and non-visual target on the HOG feature extraction data, to obtain a binary decision result;
tracking the detected visual target according to the binary decision result, to obtain relative tracking data of the visual target with respect to the clamping jaw;
analyzing the motion information of the vision camera from the relative tracking data.
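The grayscale → binarize → HOG front end might be sketched with a deliberately simplified orientation histogram standing in for a full HOG descriptor (no cells, blocks, or normalization); the threshold and bin count are assumptions:

```python
# Simplified stand-in for the binarize + HOG step, illustration only.
import math

def binarize(img, threshold=128):
    """Threshold a grayscale image (list of rows of pixel values)."""
    return [[255 if px >= threshold else 0 for px in row] for row in img]

def orientation_histogram(img, bins=4):
    """Histogram of gradient orientations over interior pixels — a crude
    proxy for the HOG feature extraction named in the text."""
    hist = [0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if gx == 0 and gy == 0:
                continue
            angle = math.atan2(gy, gx) % math.pi   # unsigned orientation
            hist[min(int(angle / math.pi * bins), bins - 1)] += 1
    return hist
```

A production system would use a full HOG implementation (e.g. the one in scikit-image or OpenCV) rather than this proxy.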
In one possible example, the step of tracking the detected visual target according to the binary decision result to obtain the relative tracking data of the visual target with respect to the clamping jaw comprises:
determining a visual target image according to the binary decision result, and transforming the pixel values of the visual target image into HSV-space pixel values, so as to calculate the feature histogram of the visual target image on the H component;
converting the visual target image into a color probability distribution image according to its feature histogram on the H component;
calculating the centroid position of the color probability distribution image within a preset search window, and judging whether the centroid position has moved to the centroid position of the fixed position of the preset search window;
if the centroid position has moved to the centroid position of the fixed position of the preset search window, taking the centroid position as the target position of the visual target;
obtaining the relative tracking data of the visual target with respect to the clamping jaw according to the obtained target positions of multiple visual targets.
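The centroid search described above is essentially a mean-shift/CamShift-style step: compute the centroid of the color probability image inside the search window, recenter the window on it, and stop when the centroid coincides with the window's fixed (center) position. A pure-Python sketch, with the window size and the probability image as assumptions:

```python
# Illustrative mean-shift-style centroid tracking inside a search window.

def window_centroid(prob, win):
    """Centroid (M10/M00, M01/M00) of prob inside win = (x, y, w, h)."""
    x0, y0, w, h = win
    m00 = m10 = m01 = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            p = prob[y][x]
            m00 += p
            m10 += x * p
            m01 += y * p
    if m00 == 0:                      # empty window: report its center
        return x0 + w // 2, y0 + h // 2
    return int(round(m10 / m00)), int(round(m01 / m00))

def track_target(prob, win, max_iter=10):
    """Recenter the window on the centroid until it stops moving; the
    converged centroid is taken as the target position."""
    H, W = len(prob), len(prob[0])
    x0, y0, w, h = win
    for _ in range(max_iter):
        cx, cy = window_centroid(prob, (x0, y0, w, h))
        nx = max(0, min(W - w, cx - w // 2))   # clamp window to the image
        ny = max(0, min(H - h, cy - h // 2))
        if (nx, ny) == (x0, y0):               # centroid at window center
            break
        x0, y0 = nx, ny
    return cx, cy
```

OpenCV's `cv2.CamShift`/`cv2.meanShift` implement the full version of this loop, including the H-component histogram back-projection the text describes.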
In one possible example, the type-2 fuzzy neural classifier is trained in the following manner:
obtaining vision acquisition HOG feature samples in advance, and storing the obtained vision acquisition HOG feature samples in a training database;
labeling the information of each vision acquisition HOG feature sample in the training database to obtain labeled feature samples;
recognizing each labeled feature sample with an initial neural classifier to obtain the visual target in each labeled feature sample, thereby obtaining image pre-judgment information;
updating the type-2 fuzzy neural classifier according to the image pre-judgment information.
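The training procedure can be sketched with a simple linear model standing in for the type-2 fuzzy neural classifier — a perceptron-style update applied wherever the image pre-judgment disagrees with the label. To be plain: this substitute is not the patent's classifier, only an illustration of the label → pre-judge → update loop:

```python
# Deliberately simplified stand-in for the classifier training loop.

def train_classifier(samples, labels, epochs=10, lr=0.1):
    """samples: list of feature vectors; labels: +1 (visual target) or -1."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # pre-judgment with the current model
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if pred != y:             # update where the pre-judgment is wrong
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def classify(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```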
In one possible example, the step of controlling the clamping jaw according to the first control information and the second control information comprises:
generating a first three-dimensional coordinate control signal for the clamping jaw and a second three-dimensional coordinate control signal for the vision camera according to the first control information and the second control information;
driving the displacement of the clamping jaw according to the first three-dimensional coordinate control signal, and driving the displacement of the vision camera according to the second three-dimensional coordinate control signal.
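A minimal sketch of this last step, assuming the control information carries explicit x/y/z fields and that a coordinate control signal is applied as a relative displacement; both assumptions are illustrative:

```python
# Illustrative only: turn first/second control info into two 3D signals.

def make_coordinate_signals(first_info, second_info):
    """First signal drives the clamping jaw, second drives the camera."""
    jaw_signal = (first_info["x"], first_info["y"], first_info["z"])
    cam_signal = (second_info["x"], second_info["y"], second_info["z"])
    return jaw_signal, cam_signal

def drive(position, signal):
    """Apply a coordinate control signal as a relative displacement."""
    return tuple(p + s for p, s in zip(position, signal))
```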
According to another aspect of the embodiments of the present invention, a clamping jaw control device with an integrated vision system is provided, applied to clamping jaw control equipment. The clamping jaw control equipment is used to control a clamping jaw within a stroke; a vision camera is integrally arranged in the clamping portion of the clamping jaw; multiple detection sensors are arranged along the stroke of the clamping jaw; and the to-be-selected positions of the vision camera under each preset operating mode are stored in the clamping jaw control equipment. The device comprises:
an obtaining module, configured to obtain displacement data of the clamping jaw as it moves from a starting point within a target stroke, vision acquisition data acquired by the vision camera on the clamping jaw, and the follow-up position of the vision camera under the target operating mode, wherein the vision acquisition data comprise video stream data of the clamping jaw while it moves from the starting point within the target stroke;
a first determining module, configured to determine a corresponding target detection sensor and a to-be-adjusted detection orientation of the target detection sensor according to the displacement data and the stored correspondence between the displacement data and the detection sensor orientation records of the previous control period, and to control the target detection sensor to adjust to the to-be-adjusted detection orientation; when the clamping jaw is located within the effective detection range of the to-be-adjusted detection orientation of the target detection sensor, obtain the position information of the clamping jaw through the target detection sensor, and then determine first control information of the clamping jaw in the target stroke according to the obtained position information of the clamping jaw;
a second determining module, configured to determine second control information of the vision camera according to the vision acquisition data acquired by the vision camera and the follow-up position;
a control module, configured to control the clamping jaw according to the first control information and the second control information.
According to another aspect of the embodiments of the present invention, clamping jaw control equipment is provided. The clamping jaw control equipment is used to control a clamping jaw within a stroke; a vision camera is integrally arranged in the clamping portion of the clamping jaw; and multiple detection sensors are arranged along the stroke of the clamping jaw. The clamping jaw control equipment comprises a processor, a readable storage medium, and a computer program stored in the readable storage medium; when the processor runs, it executes the computer program stored in the readable storage medium to implement the steps of the above clamping jaw control method with an integrated vision system.
According to another aspect of the embodiments of the present invention, a readable storage medium is provided, on which a computer program is stored; when the computer program is run by a processor, the steps of the above clamping jaw control method with an integrated vision system can be executed.
Compared with the prior art, in the clamping jaw control method, device, and clamping jaw control equipment with an integrated vision system provided by the embodiments of the present invention, a vision camera is integrally arranged in the clamping portion of the clamping jaw and multiple detection sensors are arranged along the stroke of the clamping jaw. By combining the control information of the clamping jaw and the camera during the control process, the clamping jaw can be accurately detected and controlled at any position along its full stroke; and because the vision camera is integrated into the clamping jaw, the camera can be controlled automatically or manually in use to adjust the correlation between the vision camera and the clamping jaw, which greatly improves debugging efficiency and adjustment accuracy while reducing labor cost and time cost.
To make the above objects, features, and advantages of the embodiments of the present invention clearer and easier to understand, a detailed description is given below in conjunction with embodiments and the accompanying drawings.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and therefore should not be regarded as limiting its scope. For those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic block diagram of an application scenario of the clamping jaw control method with an integrated vision system provided by an embodiment of the present invention;
Fig. 2 shows a component diagram of the clamping jaw control equipment shown in Fig. 1 provided by an embodiment of the present invention;
Fig. 3 shows a schematic flowchart of the clamping jaw control method with an integrated vision system provided by an embodiment of the present invention;
Fig. 4 shows a schematic flowchart of the sub-steps included in step S130 shown in Fig. 3;
Fig. 5 shows a functional block diagram of the clamping jaw control device with an integrated vision system provided by an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", and the like (if present) in the description, claims, and drawings of this specification are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can, for example, be implemented in orders other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units that are clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product, or device.
Fig. 1 is a schematic diagram of an application scenario of the clamping jaw control method with an integrated vision system provided by an embodiment of the present invention. Multiple detection sensors 300 are arranged along the stroke of the clamping jaw; the multiple detection sensors 300 and the clamping jaw 200 are each communicatively connected to the clamping jaw control equipment 100 through a network. A vision camera 250 is integrally arranged in the clamping portion of the clamping jaw 200, and the to-be-selected positions of the vision camera 250 under each preset operating mode are stored in the clamping jaw control equipment 100.
The application scenario shown in Fig. 1 is only one feasible example; in other feasible embodiments, the application scenario may include only some of the components shown in Fig. 1 or may also include other components.
The clamping jaw control equipment 100 can access data information stored in the detection sensors 300 and the clamping jaw 200 via the network. The network can be used for the exchange of information and/or data, and can be any kind of wired or wireless network, or a combination thereof. By way of example only, the network may include a cable network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the Internet, a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), a wireless local area network (Wireless Local Area Networks, WLAN), a metropolitan area network (Metropolitan Area Network, MAN), a public switched telephone network (Public Switched Telephone Network, PSTN), a Bluetooth network, a ZigBee network, a near-field communication (Near Field Communication, NFC) network, or the like.
Fig. 2 shows a schematic diagram of example components of the clamping jaw control equipment 100 shown in Fig. 1. The clamping jaw control equipment 100 may include one or more processors 104, such as one or more central processing units (CPUs), and each processing unit may implement one or more hardware threads. The clamping jaw control equipment 100 may also include any readable storage medium 106 for storing any kind of information such as code, settings, or data. Without limitation, for example, the readable storage medium 106 may include any one or a combination of the following: any kind of RAM, any kind of ROM, flash memory devices, hard disks, optical disks, and the like. More generally, any storage medium can store information using any technology; any storage medium may provide volatile or non-volatile retention of information; and any storage medium may represent a fixed or removable component of the clamping jaw control equipment 100. In one case, when the processor 104 executes the associated instructions stored in any storage medium or combination of storage media, the clamping jaw control equipment 100 can perform any operation of the associated instructions. The clamping jaw control equipment 100 also includes one or more drive units 108 for interacting with any storage medium, such as a hard disk drive unit, an optical disk drive unit, and the like.
The clamping jaw control equipment 100 also includes input/output 110 (I/O) for receiving various inputs (via an input unit 112) and for providing various outputs (via an output unit 114). One specific output mechanism may include a display device 116 and an associated graphical user interface (GUI) 118. The clamping jaw control equipment 100 may also include one or more network interfaces 120 for exchanging data with other devices via one or more communication units 122. One or more communication buses 124 couple the components described above together.
The communication unit 122 can be implemented in any manner, for example, through a local area network, a wide area network (for example, the Internet), a point-to-point connection, or the like, or any combination thereof. The communication unit 122 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, and the like, governed by any protocol or combination of protocols.
Fig. 3 shows a schematic flowchart of the clamping jaw control method with an integrated vision system provided by an embodiment of the present invention. The method can be executed by the clamping jaw control equipment 100 shown in Fig. 1; its detailed steps are described below.
Step S110: obtain displacement data of the clamping jaw 200 as it moves from a starting point within a target stroke, vision acquisition data acquired by the vision camera 250 on the clamping jaw 200, and the follow-up position of the vision camera 250 under the target operating mode, wherein the vision acquisition data include video stream data of the clamping jaw 200 while it moves from the starting point within the target stroke.
Step S120: determine a corresponding target detection sensor 300 and the to-be-adjusted detection orientation of the target detection sensor 300 according to the displacement data and the stored correspondence between the displacement data and the orientation records of the detection sensors 300 in the previous control period, and control the target detection sensor 300 to adjust to the to-be-adjusted detection orientation; when the clamping jaw 200 is located within the effective detection range of the to-be-adjusted detection orientation of the target detection sensor 300, obtain the position information of the clamping jaw 200 through the target detection sensor 300, and then determine first control information of the clamping jaw 200 in the target stroke according to the obtained position information of the clamping jaw 200.
Step S130: determine second control information of the vision camera 250 according to the vision acquisition data acquired by the vision camera 250 and the follow-up position.
Step S140: control the clamping jaw 200 according to the first control information and the second control information.
Based on the above design, in this embodiment a vision camera 250 is integrally arranged in the clamping portion of the clamping jaw 200 and multiple detection sensors 300 are arranged along the stroke of the clamping jaw 200. By combining the control information of the clamping jaw 200 and the camera during the control process, the clamping jaw 200 can be accurately detected and controlled at any position along its full stroke; and because the vision camera 250 is integrated into the clamping jaw 200, the camera can be controlled automatically or manually in use to adjust the correlation between the vision camera 250 and the clamping jaw 200, which greatly improves debugging efficiency and adjustment accuracy while reducing labor cost and time cost.
As a possible implementation, in step S120, this embodiment may also judge whether the effective detection range of the to-be-adjusted detection orientation of the target detection sensor 300 can reach a set detection range. If it cannot, the to-be-adjusted detection orientation of the target detection sensor 300 is readjusted according to the displacement data and the stored correspondence between the displacement data and the orientation records of the detection sensors 300 in the previous control period, until the effective detection range of the readjusted to-be-adjusted detection orientation reaches the set detection range. In this way, the accuracy of the to-be-adjusted detection orientation adjustment can be improved.
As a possible implementation, in step S120, this embodiment may control the target detection sensor 300 to adjust to the to-be-adjusted detection orientation in the following manner:
In detail, by rotating the direction of the current detection orientation of the target detection sensor 300 and/or adjusting the gear of the current detection orientation of the target detection sensor 300, the positioning mark of the target detection sensor 300 is aligned with the set detection orientation corresponding to the displacement data. The positioning mark is arranged on the body structure and/or peripheral structure of the target detection sensor 300 and is used to determine the horizontal viewing angle and/or vertical viewing angle of the target detection sensor 300.
As another possible implementation, in step S120, this embodiment may control the target detection sensor 300 to adjust to the to-be-adjusted detection orientation in the following manner:
First, a counting instruction is executed; the counting instruction contains multiple timer values for the target detection sensor 300 during the orientation adjustment. For example, the control curve for the target detection sensor 300 in the previous control period can be read, the timer values of the target detection sensor 300 during the orientation adjustment can be calculated according to the control curve, and the counting instruction is then executed based on the timer values.
Then, counting is performed repeatedly according to the counting instruction, and an interrupt signal is triggered each time a count reaches the corresponding timer value. On this basis, the target detection sensor 300 can be controlled to adjust by a corresponding angle and/or displacement according to each triggered interrupt signal. In this way, the adjustment precision of the target detection sensor 300 can be effectively improved.
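The count/interrupt scheme can be simulated in software as below; in real hardware the interrupt would come from a timer peripheral, and the angular increment per interrupt is an assumed value:

```python
# Software simulation of the counting/interrupt adjustment loop.

def run_adjustment(timer_values, angle_per_interrupt=1.5):
    """For each timer value, count up to it; reaching the value stands in
    for a timer interrupt, which applies one angular increment."""
    angle = 0.0
    interrupts = 0
    for target in timer_values:
        count = 0
        while count < target:       # counting phase
            count += 1
        interrupts += 1             # count reached -> "interrupt" fires
        angle += angle_per_interrupt
    return interrupts, angle
```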
Optionally, the timer values of the target detection sensor 300 during the orientation adjustment may be calculated according to the control curve as follows:
According to the control curve, each control period of the target detection sensor 300 and the angular unit and/or displacement unit corresponding to each control period are obtained. On this basis, the multiple timer values of the target detection sensor 300 during the orientation adjustment can be calculated according to each control period and its corresponding angular unit and/or displacement unit.
Furthermore, to further improve the accuracy of the control curve, this embodiment may also look up, from a correction database, multiple calibration coordinate points with the same angular direction and/or displacement direction as the current adjustment, and look up the multiple corresponding theoretical coordinate points from the control curve. Then, based on the multiple calibration coordinate points and the theoretical coordinate point corresponding to each calibration coordinate point, a deviation curve is calculated using a piecewise linear interpolation algorithm, the control curve is compensated based on the deviation curve, and the compensated control curve is generated and stored.
Furthermore, whether the calibration coordinate points in the correction database have reached an upper limit can also be detected. If so, at the end of the next orientation adjustment of the target detection sensor 300, a target calibration coordinate point to be overwritten is looked up from the correction database, the target calibration coordinate point being the stored calibration coordinate point that has the same angular direction and/or displacement direction as this adjustment and whose position is closest to this calibration coordinate point; the target calibration coordinate point is then replaced with this calibration coordinate point. In this way, calibration coordinate points from control periods that are too old can be prevented from affecting the accuracy of the current control curve, further improving the adjustment precision of the target detection sensor 300.
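The deviation-curve compensation might look like the following pure-Python sketch, where the deviation between calibration points and the corresponding theoretical points is interpolated piecewise-linearly and added back onto the control curve; all sample points are assumptions:

```python
# Illustrative piecewise-linear deviation compensation of a control curve.

def interp(x, xs, ys):
    """Piecewise linear interpolation of (xs, ys) at x; xs ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

def compensate(curve, calib_xs, calib_ys, theory_ys):
    """curve: list of (x, y) control points. The deviation at each
    calibration point is (calibrated - theoretical); the interpolated
    deviation is added to every point of the curve as compensation."""
    dev = [c - t for c, t in zip(calib_ys, theory_ys)]
    return [(x, y + interp(x, calib_xs, dev)) for x, y in curve]
```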
Further, with respect to step S130 and with continued reference to Fig. 4, step S130 can be implemented by the following sub-steps:
Sub-step S131: according to the vision acquisition data collected by the vision camera 250 and the follow-up position, obtain candidate orientation angles of the vision camera 250 and the displacement point and time point corresponding to each candidate orientation angle by means of a forward estimation algorithm.
Sub-step S132: according to the vision acquisition data collected by the vision camera 250, analyze the motion information of the vision camera 250 using a pre-trained type-2 neuro-fuzzy classifier, and extract vision camera historical trajectory information based on the motion information of the vision camera 250.
In the present embodiment, the vision acquisition data collected by the vision camera 250 can be converted into grayscale vision acquisition data; the grayscale data is binarized and HOG feature extraction is then performed on it, yielding HOG feature extraction data. The type-2 neuro-fuzzy classifier is obtained by training on vision-acquisition HOG feature samples. Then, according to the pre-constructed type-2 neuro-fuzzy classifier, a binary decision between vision target and non-vision target is made on the HOG feature extraction data, yielding a binary decision result. The detected vision target is tracked according to this binary decision result, relative tracking data of the vision target with respect to the clamping jaw 200 is obtained, and the motion information of the vision camera 250 is then derived by analyzing the relative tracking data.
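The grayscale-binarize-HOG front end can be illustrated roughly as follows. This is a drastically simplified stand-in: real HOG builds per-cell orientation histograms with block normalization, and the classifier itself is omitted entirely; every name here is hypothetical:

```python
import math

def binarize(gray, thresh=128):
    """Threshold a grayscale image (list of pixel rows) to 0/255."""
    return [[255 if px >= thresh else 0 for px in row] for row in gray]

def hog_like_histogram(img, bins=4):
    """Single whole-image orientation histogram weighted by gradient
    magnitude -- a toy stand-in for true per-cell HOG features."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    return h

gray = [[10, 10, 200, 200]] * 4   # a vertical edge down the middle
feat = hog_like_histogram(binarize(gray))
print(feat)  # the horizontal-gradient bin dominates
```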
In detail, a vision target image can be determined from the binary decision result, and the pixel values of the vision target image are transformed into pixel values in HSV space so as to compute the feature histogram of the vision target image on the H component. Then, after the vision target image is converted into a color probability distribution image according to its feature histogram on the H component, the centroid position of the color probability distribution image within a preset search window is computed, and it is judged whether this centroid position has moved to the centroid position of the fixed position of the preset search window. If the centroid position has moved to the centroid position of the fixed position of the preset search window, the centroid position is taken as the target position of the vision target. In this way, the relative tracking data of the vision target with respect to the clamping jaw 200 can be obtained from the target positions of the multiple vision targets thus obtained.
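The centroid-seeking search described above is essentially the mean-shift step that underlies CamShift-style tracking; a toy sketch over a plain probability array (production code would use an optimized library routine, and the HSV back-projection step is omitted here) might look like:

```python
def window_centroid(prob, win):
    """Centroid (zeroth/first image moments) of a probability image
    inside a search window win = (x0, y0, w, h)."""
    x0, y0, w, h = win
    m00 = m10 = m01 = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            p = prob[y][x]
            m00 += p
            m10 += x * p
            m01 += y * p
    if m00 == 0:
        return None
    return (m10 / m00, m01 / m00)

def mean_shift(prob, win, iters=10, eps=0.5):
    """Slide the window toward its centroid until it stops moving --
    the mean-shift step underlying CamShift tracking."""
    x0, y0, w, h = win
    for _ in range(iters):
        c = window_centroid(prob, (x0, y0, w, h))
        if c is None:
            break
        nx = int(round(c[0] - w / 2))
        ny = int(round(c[1] - h / 2))
        if abs(nx - x0) < eps and abs(ny - y0) < eps:
            break  # centroid coincides with the window's fixed position
        x0, y0 = nx, ny
    return (x0, y0, w, h)

# a 6x6 probability image with all mass at pixel (3, 3)
prob = [[0.0] * 6 for _ in range(6)]
prob[3][3] = 1.0
print(mean_shift(prob, (0, 0, 4, 4)))  # window converges onto the mass
```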
Optionally, the type-2 neuro-fuzzy classifier can be trained in the following manner:
First, vision-acquisition HOG feature samples are obtained in advance, and the acquired samples are stored in a training database. Then, the information of each vision-acquisition HOG feature sample in the training database is labeled, yielding labeled feature samples. Next, each labeled feature sample is recognized with an initial neural classifier to obtain the vision target in each labeled feature sample, yielding image pre-decision information. Finally, the type-2 neuro-fuzzy classifier is updated according to the image pre-decision information, and iterative training yields the type-2 neuro-fuzzy classifier.
Sub-step S133: perform matching judgment between the vision camera historical trajectory information and a predetermined movement trajectory. When the vision camera historical trajectory information matches the predetermined movement trajectory, convert the trajectory information of the vision camera 250 into corresponding second control information of the vision camera 250, according to the pre-stored correspondence between trajectory information and pre-stored control information, the pre-stored correspondence between control information and predictive control information, and the candidate orientation angles of the vision camera 250 together with the displacement point and time point corresponding to each candidate orientation angle.
In detail, the pre-stored control information corresponding to the vision camera historical trajectory information can be obtained from the pre-stored correspondence between trajectory information and control information; the predictive control information corresponding to the vision camera historical trajectory information can then be obtained from the correspondence between pre-stored control information and predictive control information. Next, according to the candidate orientation angles of the vision camera 250 and the displacement point and time point corresponding to each candidate orientation angle, the difference from the candidate orientation angles in the predictive control information and the displacement point and time point corresponding to each of those candidate orientation angles is determined. When the difference lies within a certain range, the predictive control information corresponding to the vision camera historical trajectory information can be used directly as the second control information of the vision camera 250.
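The two-stage lookup plus tolerance check described above could be sketched as follows, assuming the correspondences are plain lookup tables and each candidate is an (angle, displacement, time) tuple; all of this is hypothetical detail not fixed by the embodiment:

```python
def second_control_info(track_key, track_to_ctrl, ctrl_to_predict,
                        candidate, predicted_candidate, tol=1.0):
    """Two table lookups followed by a tolerance check:
    trajectory -> pre-stored control info -> predictive control info.
    The predictive info is accepted as the camera's second control
    information only if the candidate (angle, displacement, time)
    tuple differs from the predicted one by less than tol."""
    ctrl = track_to_ctrl.get(track_key)
    if ctrl is None:
        return None
    predict = ctrl_to_predict.get(ctrl)
    if predict is None:
        return None
    diff = sum(abs(a - b) for a, b in zip(candidate, predicted_candidate))
    return predict if diff < tol else None

track_to_ctrl = {'trajA': 'ctrl1'}
ctrl_to_predict = {'ctrl1': {'pan': 30.0, 'tilt': 5.0}}
print(second_control_info('trajA', track_to_ctrl, ctrl_to_predict,
                          (30.0, 2.0, 0.1), (30.2, 2.1, 0.1)))
```

With the small difference (0.3 in total) the predictive control record is returned; a large mismatch would yield no second control information.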
For step S140, a first three-dimensional coordinate control signal of the clamping jaw 200 and a second three-dimensional coordinate control signal of the vision camera 250 can be generated according to the first control information and the second control information, respectively; the clamping jaw 200 is then driven to be displaced according to the first three-dimensional coordinate control signal, and the vision camera 250 is driven to be displaced according to the second three-dimensional coordinate control signal.
Fig. 5 shows a functional module diagram of the clamping jaw control device 400 with integrated vision system provided by an embodiment of the present invention; the functions implemented by the clamping jaw control device 400 of the integrated vision system may correspond to the steps performed by the above method. The clamping jaw control device 400 of the integrated vision system can be understood as the above clamping jaw control equipment 100 or a processor of the clamping jaw control equipment 100, and can also be understood as a component, independent of the above clamping jaw control equipment 100 or processor, that implements the functions of the present invention under the control of the clamping jaw control equipment 100. As shown in Fig. 5, the clamping jaw control device 400 of the integrated vision system may include an obtaining module 410, a first determining module 420, a second determining module 430 and a control module 440. The functions of each functional module of the clamping jaw control device 400 of the integrated vision system are described in detail below.
The obtaining module 410 is configured to obtain the displacement data of the clamping jaw 200 moving from the starting point in the target stroke, the vision acquisition data collected by the vision camera 250 on the clamping jaw 200, and the follow-up position of the vision camera 250 in the target operation mode, wherein the vision acquisition data includes video stream data captured while the clamping jaw 200 moves from the starting point in the target stroke.
The first determining module 420 is configured to determine the corresponding object detection sensor 300 and the detection orientation to be adjusted of that object detection sensor 300 according to the displacement data and the correspondence, stored in the previous control period, between the displacement data and the orientation records of the detection sensors 300; to control the object detection sensor 300 to adjust to the detection orientation to be adjusted; and, when the clamping jaw 200 is located within the valid detection range of the detection orientation to be adjusted of the object detection sensor 300, after obtaining the position information of the clamping jaw 200 through the object detection sensor 300, to determine the first control information of the clamping jaw 200 in the target stroke according to the obtained position information of the clamping jaw 200.
The second determining module 430 is configured to determine the second control information of the vision camera 250 according to the vision acquisition data collected by the vision camera 250 and the follow-up position.
The control module 440 is configured to control the clamping jaw 200 according to the first control information and the second control information.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, device and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may also be implemented in other ways. The device and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architecture, functions and operations of systems, methods and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
Alternatively, the functions may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)), etc.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. Therefore, from any point of view, the embodiments are to be regarded as illustrative and not restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and it is intended that all changes falling within the meaning and scope of equivalents of the claims be included in the present invention. Any reference signs in the claims should not be construed as limiting the claims involved.

Claims (10)

a first determining module, configured to determine the corresponding object detection sensor and the detection orientation to be adjusted of that object detection sensor according to the displacement data and the correspondence, stored in the previous control period, between the displacement data and the detection sensor orientation records; to control the object detection sensor to adjust to the detection orientation to be adjusted; and, when the clamping jaw is located within the valid detection range of the detection orientation to be adjusted of the object detection sensor, after obtaining the position information of the clamping jaw through the object detection sensor, to determine the first control information of the clamping jaw in the target stroke according to the obtained position information of the clamping jaw;
CN201910532299.8A | 2019-06-19 | 2019-06-19 | Clamping jaw control method and device of integrated vision system and clamping jaw control equipment | Active | CN110142767B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910532299.8A (CN110142767B (en)) | 2019-06-19 | 2019-06-19 | Clamping jaw control method and device of integrated vision system and clamping jaw control equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910532299.8A (CN110142767B (en)) | 2019-06-19 | 2019-06-19 | Clamping jaw control method and device of integrated vision system and clamping jaw control equipment

Publications (2)

Publication Number | Publication Date
CN110142767A | 2019-08-20
CN110142767B | 2022-04-12

Family

ID=67595922

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910532299.8A (Active, CN110142767B (en)) | Clamping jaw control method and device of integrated vision system and clamping jaw control equipment | 2019-06-19 | 2019-06-19

Country Status (1)

Country | Link
CN (1) | CN110142767B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114633898A (en)* | 2022-03-25 | 2022-06-17 | 成都飞机工业(集团)有限责任公司 | Measuring method, device, equipment and medium for adjusting attitude of airplane component
CN116721958A (en)* | 2023-08-11 | 2023-09-08 | 深圳市立可自动化设备有限公司 | Chip spacing adjustment method, clamping system and processor
CN119159571A (en)* | 2024-08-06 | 2024-12-20 | 北京小雨智造科技有限公司 | Robot system deployment method, device and robot system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CA2647435A1 (en)* | 2006-03-27 | 2007-10-04 | Commissariat A L'energie Atomique | Intelligent interface device for grasping of an object by a manipulating robot and method of implementing this device
CN106808485A (en)* | 2015-11-30 | 2017-06-09 | 株式会社理光 | Steerable system, camera system, object delivery method
CN206393658U (en)* | 2017-01-17 | 2017-08-11 | 慧灵科技(深圳)有限公司 | A kind of big stroke electronic clamping jaw of built-in controller
CN108080289A (en)* | 2018-01-22 | 2018-05-29 | 广东省智能制造研究所 | Robot sorting system, robot sorting control method and device
CN108247623A (en)* | 2018-01-20 | 2018-07-06 | 广州通誉智能科技有限公司 | A kind of manipulator opening and closing diameter adjusting mechanism and its intelligent adjusting method
CN109227577A (en)* | 2018-11-18 | 2019-01-18 | 大连四达高技术发展有限公司 | Binocular vision positioning system actuator

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CA2647435A1 (en)* | 2006-03-27 | 2007-10-04 | Commissariat A L'energie Atomique | Intelligent interface device for grasping of an object by a manipulating robot and method of implementing this device
CN106808485A (en)* | 2015-11-30 | 2017-06-09 | 株式会社理光 | Steerable system, camera system, object delivery method
CN206393658U (en)* | 2017-01-17 | 2017-08-11 | 慧灵科技(深圳)有限公司 | A kind of big stroke electronic clamping jaw of built-in controller
CN108247623A (en)* | 2018-01-20 | 2018-07-06 | 广州通誉智能科技有限公司 | A kind of manipulator opening and closing diameter adjusting mechanism and its intelligent adjusting method
CN108080289A (en)* | 2018-01-22 | 2018-05-29 | 广东省智能制造研究所 | Robot sorting system, robot sorting control method and device
CN109227577A (en)* | 2018-11-18 | 2019-01-18 | 大连四达高技术发展有限公司 | Binocular vision positioning system actuator

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114633898A (en)* | 2022-03-25 | 2022-06-17 | 成都飞机工业(集团)有限责任公司 | Measuring method, device, equipment and medium for adjusting attitude of airplane component
CN114633898B (en)* | 2022-03-25 | 2024-02-23 | 成都飞机工业(集团)有限责任公司 | Measurement method, device, equipment and medium for attitude adjustment of aircraft component
CN116721958A (en)* | 2023-08-11 | 2023-09-08 | 深圳市立可自动化设备有限公司 | Chip spacing adjustment method, clamping system and processor
CN116721958B (en)* | 2023-08-11 | 2024-02-06 | 深圳市立可自动化设备有限公司 | Chip spacing adjustment method, clamping system and processor
CN119159571A (en)* | 2024-08-06 | 2024-12-20 | 北京小雨智造科技有限公司 | Robot system deployment method, device and robot system

Also Published As

Publication number | Publication date
CN110142767B (en) | 2022-04-12

Similar Documents

Publication | Title
CN109446942A (en) | Method for tracking target, device and system
Luber et al. | People tracking in rgb-d data with on-line boosted target models
KR101533686B1 (en) | Apparatus and method for tracking gaze, recording medium for performing the method
KR100612858B1 (en) | Method and apparatus for tracking people using robots
CN110142767A (en) | Clamping jaw control method and device of integrated vision system and clamping jaw control equipment
KR20090037275A (en) | Human body detection device and method
US20200327681A1 (en) | Target tracking method and apparatus
CN106600631A (en) | Multiple target tracking-based passenger flow statistics method
EP3035235B1 (en) | Method for setting a tridimensional shape detection classifier and method for tridimensional shape detection using said shape detection classifier
CN113192105B (en) | Method and device for indoor multi-person tracking and attitude measurement
WO2016114134A1 (en) | Motion condition estimation device, motion condition estimation method and program recording medium
CN103208008A (en) | Fast adaptation method for traffic video monitoring target detection based on machine vision
JP2007128513A (en) | Scene analysis
WO2015025704A1 (en) | Video processing device, video processing method, and video processing program
CN118887738B (en) | Multi-dimensional intelligent analysis method for movement process, computer equipment and storage medium
CN106570440A (en) | People counting method and people counting device based on image analysis
JP2011059898A (en) | Image analysis apparatus and method, and program
Shirsat et al. | Proposed system for criminal detection and recognition on CCTV data using cloud and machine learning
JP6977337B2 (en) | Site recognition method, device, program, and imaging control system
CN110660080A (en) | A multi-scale target tracking method based on learning rate adjustment and fusion of multi-layer convolutional features
JP2014085795A (en) | Learning image collection device, learning device and object detection device
JP2010231254A (en) | Image analysis apparatus, image analysis method, and program
CN118068962A (en) | Method and device for realizing automatic switching of books in point reading
CN114639168A (en) | Method and system for running posture recognition
KR101313879B1 (en) | Detecting and Tracing System of Human Using Gradient Histogram and Method of The Same

Legal Events

Date | Code | Title | Description
 | PB01 | Publication
 | SE01 | Entry into force of request for substantive examination
 | GR01 | Patent grant
