CN109443199A - 3D information measurement system based on intelligent light source - Google Patents


Info

Publication number
CN109443199A
CN109443199A
Authority
CN
China
Prior art keywords
image
light
light source
information
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811213081.8A
Other languages
Chinese (zh)
Other versions
CN109443199B (en)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd
Priority to CN201910862132.8A (patent CN110567371B)
Priority to CN201811213081.8A (patent CN109443199B)
Publication of CN109443199A
Application granted
Publication of CN109443199B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese

The present invention provides a 3D information measurement system, acquisition system, and method based on an intelligent light source. The measurement system includes: a light source device for providing illumination to a target; an image acquisition device for providing an acquisition area and capturing images of the target; a detection device for detecting characteristics of the light reflected from multiple regions of the target; a control device for changing the light-emission characteristics of the light source device based on the characteristics of the reflected light sent by the detection device, so that the illumination received by different regions of the target is approximately equal; an image processing device for receiving the target images sent by the image acquisition device and obtaining 3D information of the target; and a measurement device for measuring the dimensions of the target based on the 3D information. The invention is the first to propose the measurement approach of "first performing 3D synthesis, then measuring with the 3D point cloud data", and the first to identify that the main reason such measurement has had low accuracy and slow speed is the influence of illumination during image acquisition.

Description

3D information measuring system based on intelligent light source
Technical field
The present invention relates to the field of measurement technology, and in particular to the measurement of object length, appearance, and size.
Background technique
Object measurement is usually carried out with mechanical means (such as a graduated scale), electromagnetic means (such as an electromagnetic encoder), optical means (such as a laser range finder), or image-based methods. The approach of first synthesizing the 3D point cloud data of an object and then measuring its length and topography is, however, seldom used at present. Although this approach can measure any dimension of the object once its 3D information has been obtained, a technical prejudice in the measurement field holds that such methods are complicated, slow, and imprecise, and that the main reason is insufficient optimization of the synthesis algorithm; improving speed and precision in 3D acquisition, synthesis, and measurement through light control is never mentioned.
Although light control as a whole is not a brand-new technology and is also mentioned for general photography, in the prior art it has never been applied to 3D acquisition, synthesis, or measurement, and the special requirements and specific conditions of light control in the 3D acquisition, synthesis, and measurement process have not been considered, so it cannot simply be carried over and used.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a 3D information measurement system, acquisition system, and method based on an intelligent light source that overcome, or at least partially solve, the above problems.
The present invention provides a 3D information measurement system based on an intelligent light source, including:
a light source device, for providing illumination to the target object;
an image collecting device, for providing a pickup area and acquiring images of the object;
a detection device, for detecting the features of the light reflected from multiple regions of the object;
a control device, for changing the light-emission features of the light source device according to the features of the reflected light sent by the detection device, so that the illuminance received by different regions of the object is generally equalized;
an image processing apparatus, for receiving the object images sent by the image collecting device and obtaining 3D information of the object;
a measuring device, for measuring the dimensions of the object according to its 3D information.
The present invention also provides a 3D information acquisition system based on an intelligent light source, including:
a light source device, for providing illumination to the target object;
an image collecting device, for providing a pickup area and acquiring images of the object;
a detection device, for detecting the features of the light reflected from multiple regions of the object;
a control device, for changing the light-emission features of the light source device according to the features of the reflected light sent by the detection device, so that the illuminance received by different regions of the object is generally equalized;
an image processing apparatus, for receiving the object images sent by the image collecting device and obtaining 3D information of the object.
The present invention also provides a 3D information collection method based on an intelligent light source, in which:
the light source device is turned on and provides illumination to the object;
the detection device detects the features of the light reflected from different regions of the object and sends them to the control device;
the control device controls the light-emission features of the light source device according to the features of the light reflected from the different regions of the object, so that the illuminance received by different regions of the object is generally equalized;
the image collecting device acquires images of the object from multiple directions and sends them to the image processing apparatus;
the image processing apparatus receives the multiple object images sent by the image collecting device and synthesizes the 3D information of the object.
Optionally, the image processing apparatus and the control device are the same component; and/or the image collecting device and the detection device are the same component.
Optionally, the light-emission features are: luminous intensity, luminous illuminance, emission color temperature, emission wavelength, emission direction, emission position, and/or any combination thereof.
Optionally, the features of the reflected light are: reflected light intensity, reflected illuminance, reflected-light color temperature, reflected-light wavelength, reflected-light position, reflected-light uniformity, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image, and/or any combination thereof.
Optionally, the light source device includes multiple sub-light sources, or an integrated light source that can provide illumination to different regions of the object from different directions.
Optionally, the multiple sub-light sources of the light source device are located at different positions around the object.
Optionally, the multiple sub-light sources or the integrated light source are configured so that the illuminance received by multiple regions of the object is substantially equal.
Optionally, the image collecting device acquires the multiple images of the object through relative motion between the pickup area and the object.
Optionally, when acquiring the multiple images, the positions of the image collecting device are such that at least two neighboring positions satisfy the following condition:
H * (1 - cos b) = L * sin²(b);
a = m * b;
0 < m < 0.8;
where L is the distance from the image collecting device to the object, H is the actual size of the object in the acquired image, a is the included angle between the optical axes of the image collecting device at two neighboring positions, and m is a coefficient.
Optionally, when acquiring the multiple images, any three adjacent positions of the image collecting device are such that the three images acquired at the corresponding positions all contain a portion representing the same region of the object.
Inventive point and technical effect
1. The measurement approach of "first performing 3D synthesis, then measuring with the 3D point cloud data" is proposed for the first time, together with the finding that the major reason such measurement has had low accuracy and low speed is the illumination during image acquisition.
2. It is proposed for the first time that, to guarantee the quality and speed of 3D acquisition and synthesis, the uniformity of the illumination received and reflected by the object should be considered, and that the illumination apparatus should be adjusted through the cooperation of the control device and the detection device, thereby improving the quality and speed of 3D acquisition and synthesis.
3. It is proposed for the first time that, in 3D acquisition and synthesis, multiple light sources or an integrated light source capable of emitting light at multiple angles be used to guarantee relatively uniform illuminance, thereby improving measurement accuracy and speed.
4. It is proposed for the first time that, in 3D acquisition and synthesis, the illuminance over multiple regions and multiple angles of the object be generally equalized (uniform), thereby improving the quality and speed of 3D acquisition and synthesis.
5. An optimal camera position condition during 3D acquisition is proposed, further improving measurement accuracy and speed.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numbers refer to the same parts. In the drawings:
Fig. 1 is a schematic diagram of one embodiment of the 3D information measurement/acquisition system based on an intelligent light source in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of another embodiment of the 3D information measurement/acquisition system based on an intelligent light source in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the 3D information measurement/acquisition system in Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the camera follow-shot position requirements in Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of a first implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 6 is a schematic diagram of a second implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of a third implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 8 is a schematic diagram of a fourth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 9 is a schematic diagram of a fifth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 10 is a schematic diagram of a sixth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 11 is a schematic diagram of a first implementation of an iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention;
Fig. 12 is a schematic diagram of a second implementation of an iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention;
Fig. 13 is a schematic diagram of a third implementation of an iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention.
Description of symbols:
201 image collecting device, 300 object, 500 control device, 600 light source, 400 processor, 700 detection device, 101 track, 100 image processing apparatus, 102 mechanical mobile device, 202 rotary shaft, 203 shaft driving device, 204 lifting device, 205 lifting drive device, 4 controlling terminal, 211 light deflection unit, 212 light deflection driving unit.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the present disclosure will be more thoroughly understood and its scope can be fully conveyed to those skilled in the art.
Embodiment 1 (light source control)
This embodiment includes an image collecting device 201, an object 300, a control device 500, a light source 600, a processor 400, and a detection device 700. Please refer to Fig. 1 and Fig. 2.
The object 300 may be a human organ or region containing biological features, such as the iris, face, or hand, or the entire human body; it may also be an entire animal or plant or a region thereof, or a non-living object with an outer contour (such as a watch).
The image collecting device 201 may be any equipment capable of image acquisition, such as a multi-camera matrix, a fixed single camera, a video camera, or a rotating single camera; it is used to acquire images of the object 300. Two-dimensional face measurement and recognition can no longer meet current requirements for high-precision, high-accuracy acquisition, measurement, and recognition, so the present invention also proposes three-dimensional iris acquisition realized with a virtual camera matrix. The image collecting device 201 sends the collected pictures to the processor 400 for image processing and synthesis (for the specific method, see the following embodiments) to form a three-dimensional image and point cloud data.
The light source 600 is used to provide illumination to the object 300, so that the region of the object to be collected is illuminated with approximately equal illuminance. The light source 600 may include multiple sub-light sources 601, or an integrated light source 602 that provides illumination to different regions of the object from different directions. Owing to the concavity and convexity of the object contour, the light source 600 needs to provide illumination from different directions in order to achieve uniform illuminance across different regions of the object 300. The light source 600 may be given different shapes according to the region of the object 300 to be collected. For example, if 3D information of a hand is to be acquired, the sub-light sources 601 of the light source 600 should surround the hand to form a fully enclosing structure; if 3D information of a face is to be acquired, the integrated light source 602 forms a half-enclosing structure around the face. It can be appreciated that neither the sub-light sources 601 nor the integrated light source 602 need exist only within a single section, and the two can also be used in combination. For example, when acquiring a face in 3D, if only a half-ring light source is used, the chin region of the face will be in shadow and the illumination will be uneven; an integrated light source or additional sub-light sources then need to be arranged below the existing half-ring light source 602 to illuminate the chin region.
Preferably, the light emitted by each sub-light source 601 should itself meet a certain uniformity requirement. However, demanding excessive uniformity from a sub-light source 601 greatly increases cost. According to many experiments, each sub-light source preferably has uniform illuminance within half of its luminous radius.
The detection device 700 is used to detect the illuminance reflected from different regions of the object 300. For example, in face acquisition the two sides of the nose receive less light because they are shadowed, so their illuminance is relatively low. The detection device 700 receives the light reflected from the two sides of the nose, measures its illuminance or reflected light intensity, and sends the result to the controller 500; at the same time it also sends the illuminance or reflected light intensity of the other parts of the face to the controller 500. The controller 500 compares the illuminance or light intensity of the multiple regions, identifies the regions where illuminance/light intensity is non-uniform (e.g., the two sides of the nose), and according to this information controls the corresponding sub-light sources 601 to increase their luminous intensity, for example the sub-light sources 601 that mainly irradiate the two sides of the nose. Preferably, the sub-light sources 601 include a moving device, and the controller 500 can raise or lower the light intensity or illuminance of the corresponding region by controlling the position and angle of the sub-light sources. The detection device 700 detects the light intensity/illuminance reflected by the object 300 and uses it to approximate the light intensity/illuminance that the object 300 receives from the light source. When the material of the object is approximately uniform over its whole surface, extensive experiments have shown this approximation to be acceptable (error rate within 10%), and it makes the control simpler, preventing the control system from becoming overly complex. For example, when acquiring 3D information of a face, since the reflective properties of skin are relatively uniform, there is a relatively fixed relationship between the light intensity received by the face and the light intensity it reflects. It is therefore appropriate to use the detection device 700 to detect the light intensity/illuminance reflected by the face; this is also one of the inventive points of the present invention.
It can be appreciated that the reflected light intensity, reflected illuminance, reflected-light color temperature, reflected-light wavelength, reflected-light position, reflected-light uniformity, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image, and/or any combination thereof detected by the detection device 700 can also be used to control the luminous intensity, luminous illuminance, emission color temperature, emission wavelength, emission direction, emission position, and/or any combination thereof of the light source 600, as sketched below.
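A minimal sketch of the closed-loop adjustment described above is given below, assuming the per-region reflected brightness can be read from the detection device as a simple array and that each sub-light source maps to one region; the region partitioning, device interfaces, gain, and the 10% tolerance are illustrative assumptions rather than specifics of the patent:

```python
import numpy as np

def equalize_illumination(read_region_brightness, set_source_intensity,
                          n_regions, tolerance=0.10, gain=0.5, max_iters=20):
    """Adjust sub-light-source intensities until all regions of the target
    reflect approximately equal brightness (relative spread within tolerance).

    read_region_brightness() -> np.ndarray of shape (n_regions,)  # from the detection device
    set_source_intensity(i, value)                                # drive sub-light source i
    """
    intensity = np.ones(n_regions)              # start every sub-source at nominal power
    for _ in range(max_iters):
        brightness = read_region_brightness()   # reflected illuminance per region
        target = brightness.mean()
        error = (target - brightness) / max(target, 1e-9)
        if np.all(np.abs(error) <= tolerance):
            break                                # regions are "generally equalized"
        intensity *= (1.0 + gain * error)        # boost dark regions, dim bright ones
        intensity = np.clip(intensity, 0.1, 2.0)
        for i, value in enumerate(intensity):
            set_source_intensity(i, value)
    return intensity
```

The same loop structure would apply if a different reflected-light feature (e.g., image contrast per region) were read in place of brightness.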
The detection device 700 can therefore be a device specially made to measure the above parameters, or an image capture device such as a CCD, CMOS sensor, camera, or video camera. In the preferred case, the detection device 700 and the image collecting device 201 can be the same component, i.e., the image collecting device 201 realizes the function of the detection device 700 and detects the optical characteristics of the object 300. Before image acquisition of the object 300, the image collecting device 201 is first used to detect whether the illumination condition of the object 300 meets the requirements, a suitable illumination condition is achieved by controlling the light source, and only then does the image collecting device 201 start acquiring the multiple pictures used for 3D synthesis.
The processor 400 is used to synthesize the 3D information of the object 300 from the multiple pictures acquired by the image collecting device 201. Here, 3D information includes 3D images, 3D point clouds, 3D meshes, local 3D features, 3D dimensions, and all parameters carrying 3D features of the object. It can be appreciated that the controller 500 and the processor 400 can be the same device realizing both functions, or different devices realizing control and image processing respectively; this depends on the function and performance of the actual chips.
In the prior art it is generally believed that when 3D is used for acquisition, synthesis, and measurement, the main reason speed is slow and precision is low is that the synthesis algorithm is not sufficiently optimized; improving speed and precision in 3D acquisition, synthesis, and measurement through light control is never mentioned. In fact, the speed and precision of synthesis can indeed be improved by optimizing the algorithm, but the effect is still not ideal, and in particular the synthesis speed and quality differ widely between applications. Further optimizing the algorithm requires different optimizations for different occasions, which is difficult. Through many experiments, the applicant discovered that optimizing the illumination condition can greatly improve synthesis speed and quality. This behavior is very different from 2D information collection: in 2D collection the illumination condition affects only picture quality, does not affect acquisition speed, and the picture can also be retouched afterwards. The applicant found experimentally that in 3D information collection, optimizing the illumination condition significantly improves synthesis speed. For details, refer to the following table.
Embodiment 2
To solve the above technical problems, one embodiment of the invention provides a 3D information measurement/acquisition system. As shown in Fig. 3, it specifically includes a track 101, an image collecting device 201, an image processing apparatus 100, and a mechanical mobile device 102. The image collecting device 201 is mounted on the mechanical mobile device 102, and the mechanical mobile device 102 can move along the track 101, so that the pickup area of the image collecting device 201 keeps changing; over a period of time, multiple pickup areas at different spatial positions are formed, constituting an acquisition matrix. At any single moment, however, there is only one pickup area, so the acquisition matrix is "virtual". Since the image collecting device 201 usually consists of a camera, this is also called a virtual camera matrix. The image collecting device 201 may, however, also be a video camera, CCD, CMOS sensor, camera, mobile phone with an image collection function, tablet, or other electronic equipment.
The matrix points of the above virtual matrix are determined by the positions of the image collecting device 201 when acquiring images of the object; two adjacent positions at least satisfy the following condition:
H * (1 - cos b) = L * sin²(b);
a = m * b;
0 < m < 1.5;
where L is the distance from the image collecting device 201 to the object, usually the distance from the image collecting device 201 at the first position to the region of the object being collected.
H is the actual size of the object in the acquired image. The image is usually the picture taken by the image collecting device 201 at the first position; the object in that picture has a true geometric dimension (not the size within the picture), and this dimension is measured along the direction from the first position to the second position. For example, if the first position and the second position are related by a horizontal movement, the dimension is measured along the horizontal transverse direction of the object. If the leftmost point of the object visible in the picture is A and the rightmost point is B, the straight-line distance from A to B on the object is measured; that distance is H. The measurement can calculate the actual distance from the A-B distance in the picture combined with the focal length of the camera lens, or A and B can be marked on the object and the straight-line distance AB measured directly by other means.
a is the included angle between the optical axes of the image collecting device at two neighboring positions.
m is a coefficient.
Since object sizes and surface concavity/convexity differ, the value of a cannot be restricted by a strict formula and needs to be limited empirically. According to many experiments, the value of m may be within 1.5, and preferably within 0.8. For specific experimental data, refer to the following table:
Once the object and the image collecting device 201 are determined, the value of a can be calculated from the above empirical formula, and from the value of a the parameters of the virtual matrix, i.e., the positional relationship between the matrix points, can be determined.
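A minimal numeric sketch of this calculation is given below, assuming the condition is solved directly for b (the per-step angle) and that H, L, and m are known in advance; the sample values are illustrative only and not taken from the patent's experiments:

```python
import math

def camera_step_angle(H, L, m=0.8):
    """Solve H*(1 - cos b) = L*sin^2(b) for b, then return a = m*b.

    H: actual size of the object in the captured image (same unit as L)
    L: distance from the image collecting device to the object
    m: empirical coefficient, 0 < m < 1.5 (preferably within 0.8)
    Returns (a, b) in degrees.
    """
    # sin^2(b) = (1 - cos b)(1 + cos b), so the condition reduces to H = L*(1 + cos b)
    cos_b = H / L - 1.0
    if not -1.0 <= cos_b <= 1.0:
        raise ValueError("No solution: H must lie between 0 and 2*L")
    b = math.degrees(math.acos(cos_b))
    return m * b, b

# illustrative example: a 0.2 m wide collected region imaged from 0.5 m away
a, b = camera_step_angle(H=0.2, L=0.5, m=0.5)
print(f"optical-axis included angle between adjacent positions: a = {a:.1f} deg")
```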
In general, the virtual matrix is a one-dimensional matrix, for example multiple matrix points (acquisition positions) arranged along the horizontal direction. When the object is larger, however, a two-dimensional matrix is needed, and two vertically adjacent positions then satisfy the same condition on the value of a.
In some cases, even with the above empirical formula, the matrix parameter (the value of a) is not easy to determine, and it then needs to be adjusted experimentally, as follows. Calculate a predicted matrix parameter a from the above formula and control the camera to move to the corresponding matrix points according to it; for example, the camera shoots picture P1 at position W1 and, after moving to position W2, shoots picture P2. Then compare whether picture P1 and picture P2 contain a portion representing the same region of the object, i.e., whether P1 ∩ P2 is non-empty (for example, both contain the corner of a human eye, photographed from different angles). If not, readjust the value of a, move to a new position W2', and repeat the comparison step. If P1 ∩ P2 is non-empty, continue moving the camera according to the (adjusted or unadjusted) value of a to position W3, shoot picture P3, and again compare whether picture P1, picture P2, and picture P3 contain a portion representing the same region of the object, i.e., whether P1 ∩ P2 ∩ P3 is non-empty; please refer to Fig. 4. The multiple pictures are then used to synthesize 3D, and the synthesis result is tested to confirm that it meets the requirements of 3D information collection and measurement. That is, the structure of the matrix is determined by the positions of the image collecting device 201 when acquiring the multiple images, and any three adjacent positions are such that the three images acquired at the corresponding positions all contain a portion representing the same region of the object.
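A minimal sketch of the "P1 ∩ P2 is non-empty" check is given below, assuming OpenCV is available and taking "the two pictures show a common region" to mean that enough SIFT feature matches pass Lowe's ratio test; the match-count threshold is an illustrative assumption:

```python
import cv2

def shares_region(img_a, img_b, min_matches=20, ratio=0.75):
    """Rough test of whether two grayscale images contain the same object region."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches

# usage: treat P1 ∩ P2 as non-empty only if the check succeeds
# p1, p2 = cv2.imread("W1.png", 0), cv2.imread("W2.png", 0)
# if not shares_region(p1, p2): ...readjust the value of a and re-shoot at W2'
```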
After the virtual matrix has produced multiple images of the object, the image processing apparatus processes these images and synthesizes the 3D result. Synthesizing a 3D point cloud or image from the multiple images taken by the camera at multiple angles can use the method of image stitching based on the feature points of adjacent images, or other methods.
The image stitching method includes:
(1) Process the multiple images and extract the feature points of each. The features of the feature points in the multiple images can be described using the SIFT (Scale-Invariant Feature Transform) feature descriptor. The SIFT descriptor has a 128-dimensional description vector that can describe 128 aspects of any feature point over direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the extracted feature points of the multiple images, generate feature point cloud data of the facial features and feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Match the feature points of the multiple pictures according to the features of the feature points of each extracted image, and establish a matched facial feature point data set; likewise, match the feature points of the multiple pictures according to the features of the feature points of each extracted image, and establish a matched iris feature point data set.
(2-2) From the optical information of the camera and the different camera positions at which the multiple images were acquired, calculate the relative position of the camera at each position with respect to the feature points in space, and from these relative positions calculate the spatial depth information of the feature points in the multiple images; the calculation may use the bundle adjustment method.
The calculated spatial depth information of a feature point may include spatial position information and color information, namely the X-axis coordinate of the feature point's spatial position, its Y-axis coordinate, its Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way the generated feature point cloud data contains the spatial position information and color information of the feature points. The format of the feature point cloud data can be as follows (a small read/write sketch of this format is given after the method):
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn is the X-axis coordinate of the feature point's spatial position; Yn is the Y-axis coordinate; Zn is the Z-axis coordinate; Rn is the value of the R channel of the feature point's color information; Gn is the value of the G channel; Bn is the value of the B channel; and An is the value of the Alpha channel of the feature point's color information.
(2-3) Generate the feature point cloud data of the object features from the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Construct a 3D model of the object from the feature point cloud data, thereby realizing the acquisition of the object point cloud data.
(2-5) Attach the collected object color and texture to the point cloud data to form a 3D image of the object.
The 3D image may be synthesized using all of the images in a group, or using a selection of the higher-quality images among them.
The above stitching method is a limited example; the invention is not limited to it, and any method of generating three-dimensional images from multi-angle two-dimensional images can be used.
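A minimal sketch of writing and reading feature point cloud data in the X Y Z R G B A line format listed in step (2-2) above is given below; the file extension and whitespace separator are assumptions, not specified by the patent:

```python
import numpy as np

def save_feature_point_cloud(path, points):
    """points: (N, 7) array-like of rows [X, Y, Z, R, G, B, A] as described above."""
    np.savetxt(path, np.asarray(points, dtype=float),
               fmt="%.6f %.6f %.6f %.0f %.0f %.0f %.0f")

def load_feature_point_cloud(path):
    data = np.loadtxt(path).reshape(-1, 7)
    xyz, rgba = data[:, :3], data[:, 3:]   # spatial position / color information
    return xyz, rgba

# usage: one red, fully opaque feature point at (1.0, 2.0, 3.0)
save_feature_point_cloud("face_features.xyzrgba", [[1.0, 2.0, 3.0, 255, 0, 0, 255]])
xyz, rgba = load_feature_point_cloud("face_features.xyzrgba")
```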
Embodiment 3 (single-shaft-rotation iris capturing)
A small-range, small-depth object 3 has a lateral dimension that is small compared with the camera acquisition range and a small size along the camera's depth-of-field direction, i.e., the object 3 carries little information in the depth direction. For this application, although a single-camera system that moves over a large range by means of a track, a mechanical arm, or the like can likewise acquire multi-angle images of the object 3 to synthesize a 3D point cloud or image, such equipment is relatively complex, which reduces reliability; the large movements lengthen the acquisition time; and the large volume makes it unsuitable for many occasions (such as access control systems).
Moreover, small-range, small-depth objects 3 have their own particular characteristics: they require the acquisition/measurement device to be small, highly reliable, and fast, and in particular they require only a small acquisition range (an object of large depth, by contrast, needs large-range acquisition, in particular requiring the camera to be at different positions before all information can be acquired). The applicant is the first to propose this application and occasion, and to realize the 3D point cloud and image acquisition of the object 3 with the most succinct rotating device, making full use of the fact that such an object 3 requires only a small acquisition range.
The 3D information acquisition system includes: an image collecting device 201, for acquiring a group of images of the object 3 through relative motion between the pickup area of the image collecting device 201 and the object 3; and a pickup area moving device, for driving the pickup area of the image collecting device 201 and the object 3 to generate relative motion. The pickup area moving device is a rotating device, so that the image collecting device 201 rotates about a central axis.
Referring to Fig. 5 to Fig. 10, the image collecting device 201 is a camera. The camera is fixedly mounted on a camera mounting frame on a rotating seat; a rotary shaft 202 is connected under the rotating seat, and the rotary shaft 202 is driven in rotation by a shaft driving device 203. The shaft driving device 203 and the camera are both connected to a controlling terminal 4, which controls the shaft driving device 203 to implement the driving and the camera to shoot. Alternatively, the rotary shaft 202 can also be directly and fixedly connected to the image collecting device 201 to drive the camera rotation.
Differing from traditional 3D acquisition, the intended object 3 of this application is a small-range 3D object. It therefore does not need to be reproduced on a large scale; rather, its main surface features need to be acquired, measured, and compared with high precision, i.e., the measurement precision requirement is high. The camera rotation angle does not need to be large, but the rotation angle must be accurately controlled. The invention sets an angle acquisition device on the rotary shaft 202 and/or the rotating seat; the shaft driving device 203 drives the rotary shaft 202 and the camera to rotate by the set number of degrees, and the angle acquisition device measures the degree of rotation and feeds the measured degrees back to the controlling terminal 4, where they are compared with the set value to guarantee rotation precision. The shaft driving device 203 drives the rotary shaft 202 through two or more angles; the camera, driven by the rotating seat, rotates circumferentially about the central axis and completes shooting at the different angles, and the images shot at the different angles are sent to the controlling terminal 4, which processes the data to generate the final three-dimensional image. The images may also be sent to a processing unit to realize the 3D synthesis (for the specific synthesis method, see the image stitching method below); the processing unit may be a self-contained device, another device with a processing function, or remote equipment. The camera may also be connected to an image preprocessing unit that preprocesses the images. The object 3 is a face, and it is ensured that the object 3 remains within the pickup area throughout the camera rotation.
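A minimal sketch of the rotate, verify, and shoot cycle described above is given below, assuming the shaft drive, the angle acquisition device, and the camera are exposed through simple callable interfaces; all three interfaces, the tolerance, and the angle list are illustrative assumptions:

```python
def rotate_and_capture(rotate_to, read_angle, capture, angles_deg, tolerance_deg=0.1):
    """Drive the rotary shaft to each set angle, confirm it against the angle
    acquisition device's reading, and shoot one image per confirmed position.

    rotate_to(angle)  -> commands the shaft driving device
    read_angle()      -> current angle reported by the angle acquisition device
    capture()         -> returns one image from the camera
    """
    images = []
    for target in angles_deg:
        rotate_to(target)
        actual = read_angle()
        if abs(actual - target) > tolerance_deg:   # feedback check against the set value
            rotate_to(target)                      # simple retry to hold rotation precision
            actual = read_angle()
        images.append((actual, capture()))
    return images                                  # handed on for 3D synthesis / stitching
```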
The controlling terminal 4 may be a processor, a computer, a remote control center, etc.
The image collecting device 201 may alternatively be a video camera, CCD, infrared camera, or other image acquisition device. Meanwhile, the image collecting device 201 can be integrally mounted on a bracket such as a tripod or a fixed platform.
The shaft driving device 203 may be a brushless motor, a high-precision stepper motor, an angular encoder, a rotating electric machine, etc.
Referring to Fig. 6, the rotary shaft 202 is located below the image collecting device 201 and is directly connected to it, in which case the central axis intersects the image collecting device 201. In the configuration of Fig. 7 the central axis lies on the lens side of the camera of the image collecting device 201; the camera rotates about the central axis while shooting, and a rotating connecting arm is provided between the rotary shaft 202 and the rotating seat. In the configuration of Fig. 8 the central axis lies on the side opposite the camera lens of the image collecting device 201; the camera rotates about the central axis while shooting, a rotating connecting arm is provided between the rotary shaft 202 and the rotating seat, and the connecting arm can be given an upward or downward bend as needed. In the configuration of Fig. 9 the central axis lies on the side opposite the camera lens of the image collecting device 201 and is horizontal; this allows the camera to change angle in the vertical direction and suits objects 3 with special features along the vertical direction, the shaft driving device 203 driving the rotary shaft 202 to rotate and the swinging connecting arm to move up and down. In the configuration of Fig. 10 the shaft driving device 203 further includes a lifting device 204 and a lifting drive device 205 that controls the movement of the lifting device 204; the lifting drive device 205 is connected to the controlling terminal 4, enlarging the shooting area range of the 3D information acquisition device.
This 3D information acquisition device occupies little space, and its shooting efficiency is markedly higher than that of systems that require the camera to move over a wide range; it is especially suitable for application scenarios that require high-precision 3D information acquisition of small-range, small-depth targets.
Embodiment 4 (light deflection iris capturing)
Referring to Fig. 11 to Fig. 13, the 3D information acquisition system includes: an image collecting device 201, for acquiring a group of images of the object 3 through relative motion between the pickup area of the image collecting device 201 and the object 3; and a pickup area moving device, for driving the pickup area of the image collecting device 201 and the object 3 to generate relative motion. The pickup area moving device is an optical scanning device, so that the pickup area of the image collecting device 201 and the object 3 generate relative motion without the image collecting device 201 being moved or rotated.
Referring to Fig. 11, the pickup area moving device further includes a light deflection unit 211, which is optionally driven by a light deflection driving unit 212. The image collecting device 201 is a camera; the camera is fixedly mounted and its physical position neither moves nor rotates. The light deflection unit 211 causes a certain change in the camera's pickup area, so that the relative geometry of the object 3 and the pickup area changes. During this process the light deflection unit 211 can be driven by the light deflection driving unit 212 so that light from different directions enters the image collecting device 201. The light deflection driving unit 212 can be a driving device that moves the light deflection unit 211 linearly or rotates it. The light deflection driving unit 212 and the camera are both connected to the controlling terminal 4, which controls the driving and the camera shooting.
It can likewise be appreciated that, differing from traditional 3D acquisition techniques, the intended object 3 of this application is a small-range 3D object. The target therefore does not need to be reproduced on a large scale; rather, its main surface features need to be acquired, measured, and compared with high precision, i.e., the measurement precision requirement is high. The displacement or rotation of the light deflection unit 211 therefore does not need to be large, but the precision requirement and keeping the object 3 within the shooting range must be guaranteed. The invention sets an angle acquisition device and/or a displacement acquisition device on the light deflection unit 211; when the light deflection driving unit 212 drives the light deflection unit 211 to move, the angle acquisition device and/or displacement acquisition device measures the degree of rotation and/or the linear displacement and feeds the measurement back to the controlling terminal 4, where it is compared with preset parameters to guarantee precision. When the light deflection driving unit 212 drives the light deflection unit 211 to rotate and/or be displaced, the camera completes two or more shots corresponding to different position states of the light deflection unit 211; the images from these shots are sent to the controlling terminal 4, which processes the data to generate the final three-dimensional image. The camera may also be connected to an image preprocessing unit that preprocesses the images.
The controlling terminal 4 may be a processor, a computer, a remote control center, etc.
The image collecting device 201 may alternatively be a video camera, CCD, infrared camera, or other image acquisition device. Meanwhile, the image collecting device 201 is fixed on a mounting platform and its position does not change.
The light deflection driving unit 212 may be a brushless motor, a high-precision stepper motor, an angular encoder, a rotating electric machine, etc.
Referring to Fig. 11, the light deflection unit 211 is a reflecting mirror. It can be understood that one or more reflecting mirrors may be provided as the measurement requires, that one or more light deflection driving units 212 may be correspondingly arranged, and that the angle of the plane mirror is changed so that light from different directions enters the image collecting device 201. The light deflection unit 211 shown in Fig. 12 is a lens group; the lens group may comprise one or more lenses, one or more light deflection driving units 212 may be correspondingly arranged, and the lens angle is changed so that light from different directions enters the image collecting device 201. The light deflection unit 211 shown in Fig. 13 includes a multi-surface rotating mirror.
In addition, the light deflection unit 211 can be a DMD, i.e., the deflection direction of the DMD micromirrors can be controlled with an electrical signal so that light from different directions enters the image collecting device 201. Since a DMD is very small it can markedly reduce the size of the whole equipment, and since a DMD can rotate at high speed it can greatly improve measurement and acquisition speed. This is also one of the inventive points of the present invention.
Although the above two embodiments are written separately, it can be appreciated that realizing camera rotation and light deflection simultaneously is also possible.
A 3D information measuring device includes the 3D information acquisition device. The 3D information acquisition device obtains the 3D information and sends it to the controlling terminal 4, and the controlling terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the object 3. The device includes a 3D information image stitching module, a 3D information preprocessing module, a 3D information algorithm selection module, a 3D information computing module, and a spatial-coordinate-point 3D information reconstruction module. The above modules perform calculation processing on the data obtained by the 3D information acquisition device and generate measurement results; the measurement result may be a 3D point cloud image. Measurement includes geometric parameters such as length, contour, area, and volume.
A 3D information comparison device includes the 3D information acquisition device. The 3D information acquisition device obtains the 3D information and sends it to the controlling terminal 4, and the controlling terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the object 3 and compares them with preset values to judge the state of the measured target. In addition to the modules of the aforementioned 3D information measuring device, the 3D information comparison device further includes a preset 3D information extraction module, an information comparison module, a comparison result output module, and a prompt module. The comparison device can compare the measurement result of the measured object 3 with preset values, facilitating product inspection and rework. When the comparison result finds that the deviation between the measured object 3 and the preset value markedly exceeds a threshold, a warning prompt is issued.
A matching-article generating device for the object 3 can generate, from the 3D information of at least one region of the object 3 acquired by the 3D information acquisition device, a matching article fitted to the corresponding region of the object 3. Specifically, when the present invention is applied to the production of sports apparatus or medical auxiliary apparatus, individual differences exist in human anatomy, so a uniform matching article cannot satisfy everyone's needs. The 3D information acquisition device of the present invention obtains an image of a person's elbow and inputs its three-dimensional structure into the matching-article generating device, which is used to produce an elbow support that helps the elbow recover. The matching-article generating device can be an industrial molding machine, a 3D printer, or any other production equipment understood by those skilled in the art; configuring it with the 3D information acquisition device of this application enables rapid customized production.
Although the present invention describes the above variety of applications (measurement, comparison, generation), it should be understood that the present invention can also be used independently as a 3D information collection device.
A 3D information collection method comprises:
S1. While the pickup area of the image collecting device 201 and the object 3 move relative to each other, the image collecting device 201 acquires a group of images of the object 3;
S2. The pickup area moving device drives the pickup area of the image collecting device 201 and the object 3 to generate relative motion by one of the following two schemes:
S21. The pickup area moving device is a rotating device, so that the image collecting device 201 rotates about a central axis;
S22. The pickup area moving device is an optical scanning device, so that the pickup area of the image collecting device 201 and the object 3 generate relative motion without the image collecting device 201 being moved or rotated.
Synthesizing a 3D point cloud or image from the multiple images taken by the camera at multiple angles can use the method of image stitching based on the feature points of adjacent images, or other methods.
The method of image mosaic includes:
(1) Process the multiple images and extract the feature points of each. The features of the feature points in the multiple images can be described using the SIFT (Scale-Invariant Feature Transform) feature descriptor. The SIFT descriptor has a 128-dimensional description vector that can describe 128 aspects of any feature point over direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the extracted feature points of the multiple images, generate feature point cloud data of the facial features and feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Match the feature points of the multiple pictures according to the features of the feature points of each extracted image, and establish a matched facial feature point data set; likewise, match the feature points of the multiple pictures according to the features of the feature points of each extracted image, and establish a matched iris feature point data set.
(2-2) From the optical information of the camera and the different camera positions at which the multiple images were acquired, calculate the relative position of the camera at each position with respect to the feature points in space, and from these relative positions calculate the spatial depth information of the feature points in the multiple images; the calculation may use the bundle adjustment method.
The calculated spatial depth information of a feature point may include spatial position information and color information, namely the X-axis coordinate of the feature point's spatial position, its Y-axis coordinate, its Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way the generated feature point cloud data contains the spatial position information and color information of the feature points. The format of the feature point cloud data can be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn is the X-axis coordinate of the feature point's spatial position; Yn is the Y-axis coordinate; Zn is the Z-axis coordinate; Rn is the value of the R channel of the feature point's color information; Gn is the value of the G channel; Bn is the value of the B channel; and An is the value of the Alpha channel of the feature point's color information.
(2-3) Generate the feature point cloud data of the features of the object 3 from the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Construct a 3D model of the object from the feature point cloud data, thereby realizing the acquisition of the point cloud data of the object 3.
(2-5) Attach the collected color and texture of the object 3 to the point cloud data to form a 3D image of the object.
The 3D image may be synthesized using all of the images in a group, or using a selection of the higher-quality images among them.
Embodiment 5
When forming the matrix, it is also necessary to ensure that the proportion of the object, as shot by the camera at each matrix point, within the picture is appropriate, and that the shot is sharp. Therefore, while forming the matrix the camera needs to zoom and focus at the matrix points.
(1) zoom
After the camera photographs the object, the proportion of the object in the camera's field of view is estimated and compared with a predetermined value; if it is too large or too small, zooming is required. The zooming method can be: using an additional displacement device, move the image collecting device 201 radially, bringing it closer to or farther from the object, so as to ensure that the proportion of the object in the picture remains basically unchanged at each matrix point.
A range-finding device is also included, which can measure in real time the distance (object distance) from the image collecting device 201 to the object. The three-way relationship between object distance, the proportion of the object in the picture, and focal length can be tabulated; then, from the focal length and the proportion of the object in the picture, the object distance can be determined by table lookup, thereby determining the matrix point.
In some cases, when the region of the object changes between matrix points or the object changes relative to the camera, adjusting the focal length can also keep the proportion of the object in the picture constant.
(2) auto-focusing
While the virtual matrix is being formed, the range-finding device measures in real time the distance (object distance) h(x) from the camera to the object and sends the measurement to the image processing apparatus 100. The image processing apparatus 100 looks up an object distance–focal length table, finds the corresponding focus value, and issues a focusing signal to the camera 201, controlling the camera's ultrasonic motor to move the lens for rapid focusing. In this way rapid focusing is realized without adjusting the position of the image collecting device 201 or significantly adjusting its lens focal length, ensuring that the image collecting device 201 shoots sharply. This is also one of the inventive points of the present invention. Of course, besides the range-finding method, focusing can also be done by comparing image contrast.
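A minimal sketch of the table-lookup focusing described above is given below, assuming the object distance–focus relationship has been calibrated offline into a small table; the sample entries, units, and interpolation scheme are illustrative assumptions, not values from the patent:

```python
import bisect

# Illustrative object-distance (mm) -> focus value pairs; real entries would come
# from calibrating the specific lens, as the embodiment's table does.
DISTANCE_FOCUS_TABLE = [(300, 120), (400, 135), (500, 146), (700, 158), (1000, 167)]

def focus_value_for_distance(object_distance_mm):
    """Linear interpolation in the object distance -> focus lookup table."""
    distances = [d for d, _ in DISTANCE_FOCUS_TABLE]
    i = bisect.bisect_left(distances, object_distance_mm)
    if i == 0:
        return DISTANCE_FOCUS_TABLE[0][1]
    if i == len(DISTANCE_FOCUS_TABLE):
        return DISTANCE_FOCUS_TABLE[-1][1]
    (d0, f0), (d1, f1) = DISTANCE_FOCUS_TABLE[i - 1], DISTANCE_FOCUS_TABLE[i]
    t = (object_distance_mm - d0) / (d1 - d0)
    return f0 + t * (f1 - f0)

# usage: the range finder reports 460 mm, so command the focus motor accordingly
print(focus_value_for_distance(460))
```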
The object described in the present invention can be a single physical object or a composite of multiple objects.
The 3D information of the object includes 3D images, 3D point clouds, 3D meshes, local 3D features, 3D dimensions, and all parameters carrying 3D features of the object.
In the present invention, "3D" or "three-dimensional" refers to information in the three directions X, Y, and Z, in particular including depth information; this is essentially different from information that covers only a two-dimensional plane. It is also essentially different from definitions that are called 3D, panoramic, holographic, or three-dimensional but in fact include only two-dimensional information and, in particular, no depth information.
The pickup area described in the present invention refers to the range that the image collecting device (e.g., a camera) can shoot.
The image collecting device in the present invention can be a CCD, CMOS sensor, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image collection function.
For example, in one specific embodiment the reflection-free iris information acquisition system uses a commercially available industrial camera, WP-UC2000, whose parameters are as shown in the following table:
The processor or controlling terminal uses an off-the-shelf computer, such as a Dell Precision 3530, with the following parameters:
The mechanical mobile device uses a customized moving guide rail system, TM-01, with the following parameters:
Pan-tilt head: a three-axis pan-tilt head with reserved camera mechanical interface and computer control interface;
Guide rail: an arc-shaped guide rail mechanically connected to and cooperating with the pan-tilt head;
Servo motor: model 130-06025, rated torque 6 Nm, encoder type 2500-line incremental, cable length 300 cm, rated power 1500 W, rated voltage 220 V, rated current 6 A, rated speed 2500 rpm;
Control mode: controlled by a PC or by other means.
The 3D information of multiple regions of the object obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the solution of the present invention is used to obtain the 3D information of a human face and iris, which is stored on a server as standard data. In use, for example when identity verification is needed for payment or door opening, the 3D acquisition device can be used to acquire the 3D information of the face and iris again and compare it with the standard data; if the comparison succeeds, the next action is allowed. It can be appreciated that this comparison can also be used for the identification of fixed items such as antiques and artworks: the 3D information of multiple regions of the antique or artwork is first obtained as standard data, and when authentication is needed the 3D information of those regions is obtained again and compared with the standard data to distinguish the genuine from the fake.
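A minimal sketch of one way such a comparison against stored standard data could be carried out is given below, assuming both the acquired and the standard 3D information are available as feature point clouds already expressed in the same coordinate frame; the alignment assumption, distance threshold, and acceptance ratio are illustrative and not taken from the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def clouds_match(candidate_xyz, standard_xyz, threshold_mm=2.0, accept_ratio=0.95):
    """Compare an acquired feature point cloud with stored standard data.

    For every candidate point, find its nearest standard point; the clouds are
    declared a match when a sufficient fraction of points lie within the
    distance threshold. A deployed system would normally also align the clouds
    (e.g., with ICP) before comparing.
    """
    tree = cKDTree(standard_xyz)
    distances, _ = tree.query(candidate_xyz, k=1)
    fraction_close = np.mean(distances <= threshold_mm)
    return fraction_close >= accept_ratio, fraction_close

# usage sketch: standard and candidate are (N, 3) arrays of feature point coordinates
standard = np.random.rand(500, 3) * 100
candidate = standard + np.random.normal(scale=0.5, size=standard.shape)
ok, score = clouds_match(candidate, standard)
print(ok, round(score, 3))
```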
The 3D information of multiple regions of the object obtained in the above embodiments can also be used to design, produce, and manufacture objects that mate with the object. For example, after obtaining 3D data of a human head, a better-fitting hat can be designed and manufactured for that person; after obtaining head data and 3D eye data, well-fitting glasses can be designed and manufactured.
The 3D information of the object obtained in the above embodiments can also be used to measure the geometric dimensions and the outer contour of the object.
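As a simple illustration of dimensional measurement from the 3D data (again, not an algorithm specified by the patent), the overall extents of the object can be read off the axis-aligned bounding box of its point cloud, assuming the cloud is expressed in metric units and roughly aligned with the measurement axes.

```python
import numpy as np


def bounding_dimensions(points: np.ndarray):
    """Axis-aligned extents (length, width, height) of an (N, 3) point cloud.

    Assumes metric units (e.g. mm) and rough alignment with the measurement
    axes; otherwise a principal-axis alignment step would be needed first.
    """
    extents = points.max(axis=0) - points.min(axis=0)
    return float(extents[0]), float(extents[1]), float(extents[2])


# Example: points sampled inside a 100 x 40 x 20 box give extents close to it.
cloud = np.random.rand(20000, 3) * np.array([100.0, 40.0, 20.0])
print(bounding_dimensions(cloud))
```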
In the description provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the above description of exemplary embodiments. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and they may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, any combination may be used to combine all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be directly determined or derived from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and construed to cover all such other variations or modifications.

Claims (12)

CN201811213081.8A | 2018-10-18 | 2018-10-18 | 3D information measurement system based on intelligent light source | Active | CN109443199B (en)

Priority Applications (2)

Application Number | Publication | Priority Date | Filing Date | Title
CN201910862132.8A | CN110567371B (en) | 2018-10-18 | 2018-10-18 | Illumination control system for 3D information acquisition
CN201811213081.8A | CN109443199B (en) | 2018-10-18 | 2018-10-18 | 3D information measurement system based on intelligent light source

Applications Claiming Priority (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN201811213081.8A | CN109443199B (en) | 2018-10-18 | 2018-10-18 | 3D information measurement system based on intelligent light source

Related Child Applications (1)

Application Number | Relation | Publication | Priority Date | Filing Date | Title
CN201910862132.8A | Division | CN110567371B (en) | 2018-10-18 | 2018-10-18 | Illumination control system for 3D information acquisition

Publications (2)

Publication Number | Publication Date
CN109443199A (en) | 2019-03-08
CN109443199B (en) | 2019-10-22

Family

ID=65547620

Family Applications (2)

Application NumberTitlePriority DateFiling Date
CN201910862132.8AActiveCN110567371B (en)2018-10-182018-10-18Illumination control system for 3D information acquisition
CN201811213081.8AActiveCN109443199B (en)2018-10-182018-10-18 3D information measurement system based on intelligent light source

Family Applications Before (1)

Application NumberTitlePriority DateFiling Date
CN201910862132.8AActiveCN110567371B (en)2018-10-182018-10-18Illumination control system for 3D information acquisition

Country Status (1)

Country | Link
CN (2) | CN110567371B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110063712A (en)* | 2019-04-01 | 2019-07-30 | 王龙 | A lens displacement refraction system based on a simulated light field using the cloud
CN110986768A (en)* | 2019-12-12 | 2020-04-10 | 天目爱视(北京)科技有限公司 | A high-speed acquisition and measurement device for target 3D information
CN111006586A (en)* | 2019-12-12 | 2020-04-14 | 天目爱视(北京)科技有限公司 | An intelligent control method for 3D information collection
CN111160136A (en)* | 2019-12-12 | 2020-05-15 | 天目爱视(北京)科技有限公司 | Standardized 3D information acquisition and measurement method and system
CN111207690A (en)* | 2020-02-17 | 2020-05-29 | 天目爱视(北京)科技有限公司 | Adjustable iris 3D information acquisition and measurement equipment
CN111770264A (en)* | 2020-06-04 | 2020-10-13 | 深圳明心科技有限公司 | Method and device for improving the imaging effect of a camera module, and camera module
CN112257537A (en)* | 2020-10-15 | 2021-01-22 | 天目爱视(北京)科技有限公司 | Intelligent multi-point three-dimensional information acquisition equipment
CN113405950A (en)* | 2021-07-22 | 2021-09-17 | 福建恒安集团有限公司 | Method for measuring the diffusion degree of a disposable sanitary product
CN113779668A (en)* | 2021-08-23 | 2021-12-10 | 浙江工业大学 | A displacement monitoring system for foundation pit enclosure

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104040287A (en)* | 2012-01-05 | 2014-09-10 | 合欧米成像公司 | Arrangement for optical measurements and related method
CN108492358A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional data acquisition method and device based on a grating
CN108492357A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional data acquisition method and device based on a laser
CN108491760A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional iris data acquisition method and system based on a light-field camera

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH0629707B2 (en)* | 1986-10-17 | 1994-04-20 | 株式会社日立製作所 | Optical cutting line measuring device
US5418546A (en)* | 1991-08-20 | 1995-05-23 | Mitsubishi Denki Kabushiki Kaisha | Visual display system and exposure control apparatus
JP3878023B2 (en)* | 2002-02-01 | 2007-02-07 | シーケーディ株式会社 | 3D measuring device
CN1296747C (en)* | 2002-12-03 | 2007-01-24 | 中国科学院长春光学精密机械与物理研究所 | Scanning method of forming a planar light source, planar light source, and laser projection television
CN101149254B (en)* | 2007-11-12 | 2012-06-27 | 北京航空航天大学 | High-accuracy vision detection system
CN101557472B (en)* | 2009-04-24 | 2011-08-31 | 华商世纪(北京)科贸发展股份有限公司 | Automatic image data collecting system
CN102080776B (en)* | 2010-11-25 | 2012-11-28 | 天津大学 | Uniform illumination source and design method based on a multiband LED (light emitting diode) array and a diffuse reflection surface
CN103268499B (en)* | 2013-01-23 | 2016-06-29 | 北京交通大学 | Human body skin detection method based on multispectral imaging
CN104634277B (en)* | 2015-02-12 | 2018-05-15 | 上海图漾信息科技有限公司 | Capture apparatus and method, three-dimensional measuring system, depth computing method and equipment
JP6624911B2 (en)* | 2015-12-03 | 2019-12-25 | キヤノン株式会社 | Measuring device, measuring method and article manufacturing method
CN105608734B (en)* | 2015-12-23 | 2018-12-14 | 王娟 | An image reconstruction method using a three-dimensional image information acquisition device
CN106813595B (en)* | 2017-03-20 | 2018-08-31 | 北京清影机器视觉技术有限公司 | Three-camera-unit feature point matching method, measurement method, and three-dimensional detection device
CN207037685U (en)* | 2017-07-11 | 2018-02-23 | 北京中科虹霸科技有限公司 | An illumination-adjustable iris acquisition device
CN107389694B (en)* | 2017-08-28 | 2023-04-25 | 宁夏大学 | A multi-CCD camera synchronous signal acquisition device and method
CN107959802A (en)* | 2018-01-10 | 2018-04-24 | 南京火眼猴信息科技有限公司 | Illumination fill-light unit and fill-light device for a tunnel inspection image capture apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104040287A (en)* | 2012-01-05 | 2014-09-10 | 合欧米成像公司 | Arrangement for optical measurements and related method
CN108492358A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional data acquisition method and device based on a grating
CN108492357A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional data acquisition method and device based on a laser
CN108491760A (en)* | 2018-02-14 | 2018-09-04 | 天目爱视(北京)科技有限公司 | A 3D four-dimensional iris data acquisition method and system based on a light-field camera

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110063712A (en)* | 2019-04-01 | 2019-07-30 | 王龙 | A lens displacement refraction system based on a simulated light field using the cloud
CN111160136B (en)* | 2019-12-12 | 2021-03-12 | 天目爱视(北京)科技有限公司 | A standardized 3D information acquisition and measurement method and system
CN110986768A (en)* | 2019-12-12 | 2020-04-10 | 天目爱视(北京)科技有限公司 | A high-speed acquisition and measurement device for target 3D information
CN111006586A (en)* | 2019-12-12 | 2020-04-14 | 天目爱视(北京)科技有限公司 | An intelligent control method for 3D information collection
CN111160136A (en)* | 2019-12-12 | 2020-05-15 | 天目爱视(北京)科技有限公司 | Standardized 3D information acquisition and measurement method and system
CN113065502A (en)* | 2019-12-12 | 2021-07-02 | 天目爱视(北京)科技有限公司 | 3D information acquisition system based on standardized setting
CN111006586B (en)* | 2019-12-12 | 2020-07-24 | 天目爱视(北京)科技有限公司 | An intelligent control method for 3D information collection
CN111207690B (en)* | 2020-02-17 | 2021-03-12 | 天目爱视(北京)科技有限公司 | Adjustable iris 3D information acquisition and measurement equipment
CN111207690A (en)* | 2020-02-17 | 2020-05-29 | 天目爱视(北京)科技有限公司 | Adjustable iris 3D information acquisition and measurement equipment
CN111770264A (en)* | 2020-06-04 | 2020-10-13 | 深圳明心科技有限公司 | Method and device for improving the imaging effect of a camera module, and camera module
CN111770264B (en)* | 2020-06-04 | 2022-04-08 | 深圳明心科技有限公司 | Method and device for improving the imaging effect of a camera module, and camera module
CN112257537A (en)* | 2020-10-15 | 2021-01-22 | 天目爱视(北京)科技有限公司 | Intelligent multi-point three-dimensional information acquisition equipment
CN113405950A (en)* | 2021-07-22 | 2021-09-17 | 福建恒安集团有限公司 | Method for measuring the diffusion degree of a disposable sanitary product
CN113779668A (en)* | 2021-08-23 | 2021-12-10 | 浙江工业大学 | A displacement monitoring system for foundation pit enclosure
CN113779668B (en)* | 2021-08-23 | 2023-05-23 | 浙江工业大学 | Foundation pit support structure displacement monitoring system

Also Published As

Publication number | Publication date
CN110567371B (en) | 2021-11-16
CN110567371A (en) | 2019-12-13
CN109443199B (en) | 2019-10-22

Similar Documents

Publication | Title
CN109443199A (en) | 3D information measurement system based on intelligent light source
CN109394168B (en) | An iris information measuring system based on light control
CN109218702B (en) | A camera rotation type 3D measurement and information acquisition device
CN110543871B (en) | Point cloud-based 3D comparison measurement method
CN110567370B (en) | Variable-focus self-adaptive 3D information acquisition method
CN109146961A (en) | A 3D measurement and acquisition device based on a virtual matrix
CN111060023A (en) | High-precision 3D information acquisition equipment and method
CN109661687A (en) | Fixed distance virtual and augmented reality systems and methods
CN208653401U (en) | Adaptive image acquisition device, 3D information comparison device, and mating object generation device
CN109285109B (en) | A multi-region 3D measurement and information acquisition device
CN208795174U (en) | Camera rotation type image acquisition device, comparison device, and mating object generation device
CN209279885U (en) | Image acquisition device, 3D information comparison device, and mating object generation device
CN108492357A (en) | A 3D four-dimensional data acquisition method and device based on a laser
CN111780682A (en) | 3D image acquisition control method based on a servo system
US11882354B2 (en) | System for acquiring iris images for enlarging the iris acquisition range
CN109146949B (en) | A 3D measurement and information acquisition device based on video data
CN208653473U (en) | Image acquisition device, 3D information comparison device, and mating object generation device
CN208795167U (en) | Illumination system for a 3D information acquisition system
CN109394170A (en) | A reflective iris information measuring system
CN109084679A (en) | A 3D measurement and acquisition device based on a spatial light modulator
CN209103318U (en) | An iris shape measurement system based on illumination
CN209203221U (en) | An iris dimension measuring system and information acquisition system based on light control
CN111207690B (en) | Adjustable iris 3D information acquisition and measurement equipment
CN213072921U (en) | Multi-region image acquisition equipment, 3D information comparison, and mating object generation device
CN211375622U (en) | High-precision iris 3D information acquisition equipment and iris recognition equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
