Human-computer interactive iris image automatic acquisition device
Technical Field
The invention relates to the technical field of biometric recognition, and more particularly to a human-computer interactive automatic iris image acquisition device.
Background
Iris recognition technology has remarkable advantages in the field of biometric identification, including high accuracy, good security and high recognition speed. In an iris recognition system, acquisition of the iris image is the first link in the whole chain and also one of the most difficult.
In current iris image acquisition devices, the position of the user's eyes is constrained by a cylindrical shell, and the internal imaging module photographs the iris only when the user presses closely against the shell. Such a device requires the user to make active contact with it on every use, demanding a high degree of cooperation, and sharing one device among many users poses hygiene risks.
In non-contact iris recognition devices, a cold mirror is often placed in front of the lens: when users can see their own eyes in the cold mirror, the iris region is assumed to be facing the lens. However, the left and right eyes of a person may differ in vision and in field of view, so a user with a large difference between the two eyes may in fact be offset from the lens even when apparently aligned in the cold mirror.
Therefore, it is desirable to provide a human-computer interactive automatic iris image acquisition device.
Disclosure of Invention
The invention aims to provide a human-computer interactive automatic iris image acquisition device which uses non-contact acquisition and feeds the acquired eye-region information back to the user in real time during acquisition, thereby reducing hygiene risks, lowering the cooperation burden on the user and improving the user experience.
In order to achieve the purpose, the invention adopts the following technical scheme:
a human-computer interactive iris image automatic acquisition device comprises: a face image acquisition unit, a distance measuring unit, a data processing unit, an iris image acquisition unit, an illumination unit, a human-computer interaction unit and a pitching unit, wherein the face image acquisition unit, the distance measuring unit, the data processing unit, the iris image acquisition unit, the illumination unit and the human-computer interaction unit are arranged in a shell;
the face image acquisition unit acquires a face image;
the distance measuring unit is used for measuring the distance from the face of the user to the face image acquisition unit and generating distance information;
the data processing unit generates a first human-computer interaction instruction according to the face image, and generates, according to the distance information, either a second human-computer interaction instruction or a pitching control instruction and an illumination control instruction;
an iris image acquisition unit for acquiring an iris image;
the illumination unit illuminates the image acquisition area of the iris image acquisition unit according to the illumination control instruction;
the data processing unit also generates a third human-computer interaction instruction according to the iris image;
the human-computer interaction unit issues a prompt to align with the face image acquisition unit according to the first human-computer interaction instruction, a prompt to move closer to or farther from the face image acquisition unit according to the second human-computer interaction instruction, and a prompt to move left or right according to the third human-computer interaction instruction;
and the pitching unit drives the shell to adjust the pitching angle according to the pitching control instruction.
Preferably, the ranging unit is an ultrasonic range finder or a laser range finder.
Preferably, the iris image acquisition unit comprises an infrared light filter, a zoom lens and an infrared light image sensor which are arranged coaxially.
Preferably, the illumination unit comprises an infrared light source and a light source driver, wherein the infrared light source consists of a plurality of light-emitting diodes, and the light source driver supplies a corresponding output current according to the illumination control instruction so that the infrared light source emits light of suitable intensity.
Preferably, the human-computer interaction unit comprises a liquid crystal display screen and a voice module, wherein the liquid crystal display screen displays the corresponding position prompt in text or image form according to the different human-computer interaction instructions, and the voice module issues the corresponding position prompt in voice form according to the different human-computer interaction instructions.
Preferably, the data processing unit comprises a data processor and an interface module, and the interface module is one or any combination of the following interface types: USB interface, network interface, serial interface.
Preferably, the pitching unit comprises a motor and a pitching mechanism, wherein the motor is a stepping motor which, according to the pitching control instruction, controls the pitching mechanism to drive the shell to adjust the pitch angle.
The invention has the following beneficial effects:
1. A visible light image sensor and the processor module locate the human face, and the motor then drives the pitching module to adjust the pitch angle to suit the user's height, so that little cooperation is required from the user.
2. The liquid crystal display screen gives prompts, or displays in real time the eye image captured by the infrared light image sensor, so that the user can see the position of his or her own eyes; this provides visual human-computer interaction and makes it easy for the user to adjust position and align with the lens.
3. The distance measuring module and the processor module judge the distance between the user and the device, and the voice module then prompts the user; this provides audible human-computer interaction and makes it easy for the user to move into the recognizable distance.
4. The zoom lens and the infrared light image sensor photograph the user's iris, giving a larger recognizable range than a device with a fixed-focus lens.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 shows a block diagram of a human-computer interactive iris image automatic acquisition device.
Fig. 2 shows a schematic diagram of the human-computer interaction panel of the human-computer interactive iris image automatic acquisition device.
Fig. 3 shows an internal structure diagram of the human-computer interactive iris image automatic acquisition device.
Fig. 4 shows a working flow chart of the man-machine interactive iris image automatic acquisition device.
Detailed Description
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
As shown in fig. 1, the human-computer interactive iris image automatic acquisition device provided in this embodiment comprises: a face image acquisition unit 2, a distance measuring unit 3, a data processing unit 4, an iris image acquisition unit 5, an illumination unit 6 and a human-computer interaction unit 7 which are arranged in a shell 1, and a pitching unit 8 which is arranged outside the shell 1 and connected with the shell 1;
the face image acquisition unit 2 acquires a face image;
the distance measuring unit 3 is used for measuring the distance from the face of the user to the face image acquisition unit 2 and generating distance information;
the data processing unit 4 generates a first human-computer interaction instruction according to the face image, and generates, according to the distance information, either a second human-computer interaction instruction or a pitching control instruction and an illumination control instruction;
an iris image acquisition unit 5 for acquiring an iris image;
the illumination unit 6 illuminates the image acquisition area of the iris image acquisition unit 5 according to the illumination control instruction;
the data processing unit 4 also generates a third human-computer interaction instruction according to the iris image;
the human-computer interaction unit 7 issues a prompt to align with the face image acquisition unit 2 according to the first human-computer interaction instruction, a prompt to move closer to or farther from the face image acquisition unit 2 according to the second human-computer interaction instruction, and a prompt to move left or right according to the third human-computer interaction instruction;
and the pitching unit 8 drives the shell 1 to adjust the pitching angle according to the pitching control instruction.
Wherein,
the face image acquisition unit 2 includes: the wide-angle lens 9 and the visible light image sensor 10 are coaxially arranged, the focal length of the wide-angle lens 9 is within the range of 8-12mm, and the visible light image sensor 10 is a CCD or CMOS image sensor. The start and stop of the visible light image sensor 10 are controlled by the data processing unit 4, the collected face image is transmitted to the data processing unit 4, the data processing unit 4 determines the face position through the existing face positioning algorithm, and therefore the pitching unit 8 is controlled to drive the shell 1 and all functional units in the shell 1 to adjust the pitching angle to adapt to the height of a user.
The distance measuring unit 3 is an ultrasonic distance measuring module or a laser distance measuring module. The start and stop of the distance measuring unit 3 are controlled by the data processing unit 4, the distance information is transmitted to the data processing unit 4, and the data processing unit 4 can judge the distance between the user and the face image acquisition unit 2 through the distance information.
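The distance judgment described above can be sketched as a simple window check; the 20-50 cm limits below come from the recognizable-distance range stated later in the workflow, and the return labels are illustrative:

```python
def classify_distance(distance_cm: float,
                      near_cm: float = 20.0, far_cm: float = 50.0) -> str:
    """Classify the measured user distance against the recognizable range.

    Returns "too_close", "too_far" or "ok"; the 20-50 cm window matches
    the embodiment's recognizable distance.
    """
    if distance_cm < near_cm:
        return "too_close"   # prompt the user to move back
    if distance_cm > far_cm:
        return "too_far"     # prompt the user to move closer
    return "ok"
```

The "too_close"/"too_far" results correspond to the second human-computer interaction instruction, and "ok" lets the flow proceed to zoom adjustment.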
The iris image acquisition unit 5 comprises an infrared light filter 11, a zoom lens 12 and an infrared light image sensor 13 arranged coaxially. The infrared light filter 11 is an optical filter that passes only infrared light, the zoom range of the zoom lens 12 is 22-50 mm, and the infrared light image sensor 13 is a CCD or CMOS image sensor with at least 5 million pixels; the infrared light image sensor 13 is controlled by the data processing unit 4 and transmits the captured iris image to it.
The illumination unit 6 comprises an infrared light source 14 and a light source driver 15. The infrared light source 14 is composed of a plurality of light-emitting diodes (LEDs) with a peak wavelength of 850 nm; under the control of the data processing unit 4, the light source driver 15 supplies a corresponding output current according to the illumination control instruction so that the infrared light source 14 emits light of suitable intensity.
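Since the illumination control instruction is derived from the distance information, the driver's output current plausibly scales with user distance so that a farther iris still receives enough light. A minimal sketch of such a control law (the linear interpolation and the 50-200 mA current limits are assumptions, not taken from the patent):

```python
def led_current_ma(distance_cm: float,
                   min_ma: float = 50.0, max_ma: float = 200.0,
                   near_cm: float = 20.0, far_cm: float = 50.0) -> float:
    """Scale the IR LED drive current with the measured user distance so
    the iris region stays evenly lit across the recognizable range.

    Linear interpolation between illustrative current limits; distances
    outside the 20-50 cm window are clamped to it.
    """
    d = min(max(distance_cm, near_cm), far_cm)
    frac = (d - near_cm) / (far_cm - near_cm)
    return min_ma + frac * (max_ma - min_ma)
```

At the near limit the minimum current is used, at the far limit the maximum, with a smooth ramp in between.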
The human-computer interaction unit 7 comprises a liquid crystal display screen 16 and a voice module 17. The liquid crystal display screen 16 consists of two bilaterally symmetric screens, each with a resolution of at least 640 x 480 pixels. Under the control of the data processing unit 4, the liquid crystal display screen 16 displays the corresponding position prompt to the user in text or image form according to the different human-computer interaction instructions, including displaying in real time the eye image captured by the infrared light image sensor (i.e., the left/right position prompt: the user moves left or right according to the displayed eye position so as to align with the infrared light image sensor 13). The voice module 17, under the control of the data processing unit 4, issues the corresponding position prompt in voice form according to the different human-computer interaction instructions.
The data processing unit 4 comprises a data processor which processes the various images (face image, iris image) and information (distance information) to generate the respective instructions (first human-computer interaction instruction, second human-computer interaction instruction, pitching control instruction and illumination control instruction). The data processing unit 4 may further optionally comprise an interface module 18, which is one or any combination of the following interface types: USB interface, network interface, serial interface. The interface module 18 transmits data from the data processing unit 4 to an external device.
The pitching unit 8 comprises a motor 19 and a pitching mechanism 20, the motor 19 being a stepping motor. Under the control of the data processing unit 4, the motor 19 controls the pitching mechanism 20 according to the pitching control instruction to drive the shell 1 and all functional units inside it to adjust the pitch angle; the pitching mechanism 20 can move the shell 1 and its functional units between -45 and +45 degrees relative to the horizontal, so as to suit users of different heights.
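A stepping motor moves in fixed increments, so a pitch correction must be converted into a step count and direction while respecting the mechanism's 45-degree travel limits. A minimal sketch (the 1.8-degree step angle and 10:1 gear ratio are illustrative assumptions, not from the patent):

```python
def pitch_steps(target_deg: float, current_deg: float,
                step_angle_deg: float = 1.8, gear_ratio: float = 10.0):
    """Convert a desired pitch angle into stepper-motor steps.

    Clamps the target to the +/-45 degree travel of the pitching
    mechanism; step angle and gear ratio are illustrative values.
    Returns (steps, direction), direction being +1 (up) or -1 (down).
    """
    target = max(-45.0, min(45.0, target_deg))  # respect travel limits
    delta = target - current_deg
    steps = round(abs(delta) * gear_ratio / step_angle_deg)
    return steps, (1 if delta >= 0 else -1)
```

Requesting an angle beyond the mechanical range simply saturates at the 45-degree limit instead of over-driving the mechanism.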
The human-computer interactive automatic iris image acquisition device provided in this embodiment is further described below in the context of a specific application scenario.
As shown in fig. 2, the human-computer interaction panel is located on the side of the device facing the user. The infrared light filter 11, the zoom lens 12 and the infrared light image sensor 13 are located at the center of the panel, between the two screens of the liquid crystal display screen 16. The liquid crystal display screen 16 can display the image captured by the infrared light image sensor 13 in real time, split between the left and right screens, so that the user can easily judge whether his or her eyes are facing the infrared light image sensor 13. The infrared light source 14 consists of 40 light-emitting diodes (LEDs) with a peak wavelength of 850 nm, distributed around the liquid crystal display screen so that the iris region is evenly illuminated when the eyes face the infrared light image sensor 13. The distance measuring unit 3 is located directly below the infrared light filter 11, which is convenient for measuring the distance from the user's head to the face image acquisition unit 2. The wide-angle lens 9 and the visible light image sensor 10 are located directly below the distance measuring unit 3 and capture a large field of view so that the data processing unit 4 can determine the position of the face. The voice module 17 consists of two loudspeakers located to the left and right of the wide-angle lens 9 and the visible light image sensor 10.
As shown in fig. 3, the centers of the infrared light filter 11, the zoom lens 12 and the infrared light image sensor 13 lie on one straight line perpendicular to the human-computer interaction panel. The centers of the wide-angle lens 9 and the visible light image sensor 10 likewise lie on one straight line perpendicular to the panel. The data processing unit 4 is located at the center of the interior of the shell 1. The light source driver 15 is located behind the infrared light source 14 and above the data processing unit 4. The motor 19 is located at the middle of the outer rear end of the shell 1 and is electrically connected with the pitching mechanism 20, and the pitching mechanism 20 can be mounted to an external fixture (such as a wall).
As shown in fig. 4, the device works as follows. When acquisition starts, the device is initialized; the user is then given the prompt to align with the face image acquisition unit 2, namely voice prompt 1, "please face the device", while the liquid crystal display screen 16 displays the corresponding text prompt. Next, the visible light image sensor 10 begins to collect images and transmits them to the data processing unit 4 for face localization; if no face is found, the flow returns to the previous step and the alignment prompt is repeated. If a face is found, the motor 19 is controlled according to the face position to actuate the pitching mechanism 20 so that the shell 1 and all functional units inside it are aligned with the face. Next, the distance measuring unit 3 measures the distance between the user and the face image acquisition unit 2 and transmits the distance information to the data processing unit 4, which judges whether the user is within the recognizable distance of 20-50 cm. If not, voice prompt 2 is given, "please move back a little" when the user is too close or "please move a little closer" when the user is too far, the liquid crystal display screen 16 displays the corresponding text prompt, and the flow returns to the distance measurement step. If the distance is within the recognizable range, the data processing unit 4 controls the zoom lens 12 to adjust to a suitable focal length according to the distance information.
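The text says the zoom lens is set "to a proper focal length according to the distance information" without giving the control law; one plausible sketch maps the 20-50 cm recognizable range linearly onto the lens's 22-50 mm zoom range (the linear mapping itself is an assumption):

```python
def zoom_focal_mm(distance_cm: float,
                  near_cm: float = 20.0, far_cm: float = 50.0,
                  min_f: float = 22.0, max_f: float = 50.0) -> float:
    """Choose a focal length for the 22-50 mm zoom lens from the measured
    distance, so a farther iris is imaged at a longer focal length.

    Linear interpolation over the recognizable range; distances outside
    20-50 cm are clamped.
    """
    d = min(max(distance_cm, near_cm), far_cm)
    return min_f + (d - near_cm) * (max_f - min_f) / (far_cm - near_cm)
```

A user at the near limit gets the widest setting and a user at the far limit the longest, keeping the iris at a roughly constant size on the sensor.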
Next, the infrared light image sensor 13 begins to collect images and transmits them to the data processing unit 4, which controls the liquid crystal display screen 16 to display the collected image and judges, using an existing eye localization algorithm, whether the eyes have been captured. If no eyes are found, the user is given the left/right position prompt, namely voice prompt 3, asking the user to move left or right so that the eyes appear at the center of the display screen, and the flow returns to the previous image collection step. If eyes are found, the data processing unit 4 extracts the iris image from the collected image and judges whether its quality is acceptable; if not, the flow returns to the zoom adjustment step of the zoom lens 12 to improve image quality. If the iris image is acceptable, the data processing unit 4 controls the interface module 18 to transmit it to the iris recognition system, and voice prompt 4, "iris image acquisition succeeded", can be given through the voice module 17. The workflow is then complete.
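The whole fig. 4 flow can be sketched as a small state machine. The check callables below stand in for the face-localization, ranging, eye-localization and quality modules, and the prompt strings paraphrase the voice prompts; states, thresholds and transitions follow the flow described in the text (the function names and exact prompt wording are illustrative):

```python
def acquisition_flow(find_face, measure_distance, find_eyes, image_ok,
                     max_iters=100):
    """Drive the fig. 4 workflow as a simple state machine.

    find_face/find_eyes/image_ok are callables returning bool,
    measure_distance returns centimetres; returns the list of prompts
    issued, ending with the success prompt on a completed acquisition.
    """
    state = "prompt_face"
    prompts = []
    for _ in range(max_iters):
        if state == "prompt_face":
            prompts.append("please face the device")         # voice prompt 1
            state = "locate_face" if find_face() else "prompt_face"
        elif state == "locate_face":
            state = "measure"                                # pitch aligned to the face
        elif state == "measure":
            d = measure_distance()
            if d < 20:
                prompts.append("please move back a little")  # voice prompt 2 (too close)
            elif d > 50:
                prompts.append("please move a little closer")  # voice prompt 2 (too far)
            else:
                state = "capture"                            # zoom set from distance
        elif state == "capture":
            if not find_eyes():
                prompts.append("please move left or right")  # voice prompt 3
            elif image_ok():
                prompts.append("iris image acquired")        # voice prompt 4
                return prompts
            # unqualified iris image: refocus and capture again
    return prompts
```

Feeding the machine a user who starts too far away and slightly off-centre reproduces the prompt sequence of the described workflow.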
It should be understood that the above embodiments are only examples given to illustrate the invention clearly and are not intended to limit it. Other variations or modifications will be apparent to those skilled in the art on the basis of the above description; it is neither necessary nor possible to enumerate all embodiments here, and all obvious variations or modifications fall within the scope of the invention.