CN111914593A - Image acquisition method and device, storage medium and electronic equipment - Google Patents

Image acquisition method and device, storage medium and electronic equipment

Info

Publication number
CN111914593A
CN111914593A
Authority
CN
China
Prior art keywords
image
acquired
light
sensor
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910380813.0A
Other languages
Chinese (zh)
Inventor
黄建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Harvest Intelligence Tech Co Ltd
Original Assignee
Shanghai Harvest Intelligence Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Harvest Intelligence Tech Co Ltd
Priority to CN201910380813.0A
Priority to US16/869,318 (US11582373B2)
Priority to TW109115214A (TWI811540B)
Publication of CN111914593A
Legal status: Pending (current)

Abstract

An image acquisition method and device, a storage medium and an electronic device are provided. The image acquisition device includes: a light-transmitting cover plate having a first face and a second face opposite to each other in the thickness direction, the first face of the light-transmitting cover plate being adapted to contact an object to be acquired; a light source component having a first surface and a second surface opposite to each other in the thickness direction, the first surface of the light source component being arranged toward the second face of the light-transmitting cover plate; and a sensor component arranged on the second surface of the light source component, wherein the sensor component is formed by splicing a plurality of sensor modules, and the plurality of sensor modules are distributed on the same plane. The solution provided by the invention achieves large-area imaging by splicing multiple sensor modules, so as to meet the demand of smart devices for large-size screens, while offering a high data processing speed and helping to reduce manufacturing and maintenance costs.

Description

Image acquisition method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of image acquisition, in particular to an image acquisition method and device, a storage medium and electronic equipment.
Background
With the development of information technology, biometric identification plays an increasingly important role in ensuring information security. Among biometric techniques, fingerprint identification has become one of the key technical means for identity verification and device unlocking and is widely applied in the mobile internet field.
As the screen-to-body ratio of smart devices keeps increasing, traditional capacitive fingerprint identification can no longer meet the requirements, and ultrasonic fingerprint identification still faces problems of technical maturity and cost, so optical fingerprint identification is expected to become the mainstream solution for fingerprint identification.
Existing optical fingerprint identification schemes are based on the imaging principle of geometric-optics lenses, and the fingerprint module used includes a micro-lens array, an optical spatial filter and other elements, so they suffer from a complex structure, a thick module, a small sensing area, high cost and other drawbacks.
In contrast, lens-free under-screen optical fingerprint identification based on the total-reflection imaging principle of physical optics offers a simple structure, a thin module, a large sensing area, low cost and other advantages.
However, with the ever-increasing screen-to-body ratio of smart devices, the emergence of the full-screen concept and the launch of full-screen products, the production capacity for existing optical under-screen fingerprint identification devices based on the total-reflection principle cannot meet the full-screen requirements of smart devices.
On the other hand, to meet the fingerprint identification needs of smart devices with a high screen-to-body ratio, the photosensitive area of the optical fingerprint identification device must be enlarged, which sharply increases the number of pixel points and in turn severely reduces the data processing speed each time the device outputs an image.
In addition, the larger photosensitive area and the larger number of pixel points also increase the manufacturing cost of the optical fingerprint identification device.
Disclosure of Invention
The technical problem solved by the invention is how to achieve large-area imaging at a relatively low cost, so as to meet the demand of smart devices for large-size screens while maintaining a high data processing speed.
In order to solve the above technical problem, an embodiment of the present invention provides an image capturing device, including: a light-transmitting cover plate having a first face and a second face opposite to each other in the thickness direction, the first face of the light-transmitting cover plate being adapted to contact an object to be acquired; a light source component having a first surface and a second surface opposite to each other in the thickness direction, the first surface of the light source component being arranged toward the second face of the light-transmitting cover plate; and a sensor component arranged on the second surface of the light source component, wherein the sensor component is formed by splicing a plurality of sensor modules, and the plurality of sensor modules are distributed on the same plane.
Optionally, adjacent sides of adjacent sensor modules are attached to each other.
Optionally, the light source component includes a plurality of light emitting points arranged in an array, and in a plurality of collecting periods, the plurality of light emitting points sequentially emit light in an array translation manner.
Optionally, the light source component is a display panel.
Optionally, the display panel is selected from: a liquid crystal display, an active-matrix organic light-emitting diode (AMOLED) display, and a micro light-emitting diode (micro-LED) display.
In order to solve the above technical problem, an embodiment of the present invention further provides an image acquisition method, including: acquiring images respectively acquired by a plurality of sensor modules, wherein each image is a local image of an object to be acquired; and splicing the acquired multiple images to obtain the image of the object to be acquired.
Optionally, the plurality of sensor modules are distributed on the same plane, and the stitching of the acquired multiple images to obtain the image of the object to be acquired includes: for each sensor module, determining the position of the image acquired by that sensor module within the image of the object to be acquired according to the position of the sensor module on the plane; and stitching the plurality of images according to the determined positions to obtain the image of the object to be acquired.
Optionally, the plurality of sensor modules acquire images under the illumination of a light source component, where the light source component includes a plurality of light-emitting points arranged in an array, and in a plurality of acquisition periods the plurality of light-emitting points sequentially emit light in an array-translation manner; the stitching of the acquired plurality of images to obtain the image of the object to be acquired further includes: acquiring a plurality of images of the object to be acquired that are respectively stitched in the plurality of acquisition periods, wherein the image of the object to be acquired stitched in the current acquisition period is translated by a preset distance in a first direction relative to the image of the object to be acquired stitched in the previous acquisition period, the first direction being parallel to the direction of the array translation; determining the image of the object to be acquired stitched in each acquisition period as an image to be processed; translating the plurality of images to be processed along a second direction so that the translated images to be processed are aligned, thereby obtaining a processed image, the second direction being consistent with the first direction; and generating a final image of the object to be acquired based on the processed image.
Optionally, the generating a final image of the object to be acquired based on the processed image includes: judging whether the integrity of the processed image reaches a preset threshold value or not; when the judgment result shows that the integrity of the processed image is smaller than the preset threshold value, continuing to acquire the image in the next acquisition period and translating the acquired image to be aligned with the processed image along the second direction to obtain an updated processed image until the integrity of the updated processed image reaches the preset threshold value; and determining the updated processed image as the final image of the object to be acquired.
Optionally, the image acquisition method further includes: judging whether the integrity of the image of the object to be acquired reaches a preset threshold; when the judgment result shows that the integrity of the image of the object to be acquired is smaller than the preset threshold, determining the sensor module corresponding to a blank area in the image of the object to be acquired; and acquiring the image acquired by the sensor module corresponding to the blank area in the next acquisition period and stitching it into the image of the object to be acquired so as to obtain an updated image of the object to be acquired, until the integrity of the updated image of the object to be acquired reaches the preset threshold.
To solve the above technical problem, an embodiment of the present invention further provides an electronic device, including: the above-described image acquisition device; a processor coupled with the image acquisition device; a memory having stored thereon a computer program which, when being executed by the processor, carries out the steps of the image acquisition method as described above.
To solve the above technical problem, an embodiment of the present invention further provides a storage medium having stored thereon computer instructions, where the computer instructions execute the steps of the above method when executed.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
An embodiment of the present invention provides an image capturing apparatus, including: a light-transmitting cover plate having a first face and a second face opposite to each other in the thickness direction, the first face of the light-transmitting cover plate being adapted to contact an object to be acquired; a light source component having a first surface and a second surface opposite to each other in the thickness direction, the first surface of the light source component being arranged toward the second face of the light-transmitting cover plate; and a sensor component arranged on the second surface of the light source component, wherein the sensor component is formed by splicing a plurality of sensor modules, and the plurality of sensor modules are distributed on the same plane. With the solution of this embodiment, large-area imaging can be achieved by splicing multiple small-area sensor modules, so as to meet the demand of smart devices for large-size screens, while the data processing speed is high and manufacturing and maintenance costs are reduced.
In particular, the multi-block splicing approach adopted in this embodiment effectively compensates for the inability of existing production capacity to fabricate a single large-size sensor component. By flexibly adjusting the number of spliced sensor modules according to the screen-to-body ratio of the smart device, the image acquisition device of this embodiment can effectively meet the demand of smart devices for large-size screens.
Further, in the prior art there is only a single one-piece sensor component, and every pixel point on it must output data each time a captured image is output; as the sensor area and the number of pixel points grow, this all-at-once output mode inevitably slows down data processing. With the image acquisition device of this embodiment, each sensor module can be controlled independently, so the sensor modules can output data simultaneously and in parallel, which effectively increases the data processing speed of the image acquisition device.
Further, since the prior art has only a one-piece sensor component, the whole component must be replaced even when only a local region is damaged, which undoubtedly increases the manufacturing and maintenance costs of the image capturing device. With the image acquisition device of this embodiment, since the sensor modules are spliced together, only the damaged sensor module needs to be replaced when a local area fails, which effectively reduces the manufacturing and maintenance costs of the image acquisition device.
Further, an embodiment of the present invention also provides an image acquisition method, including: acquiring images respectively acquired by a plurality of sensor modules, each image being a local image of an object to be acquired; and stitching the acquired images to obtain the image of the object to be acquired. In this way, the images acquired by the spliced sensor modules can be accurately synthesized into a complete image of the object to be acquired, avoiding image distortion or locally missing content.
Drawings
FIG. 1 is a schematic diagram of an image capture device according to an embodiment of the present invention;
FIG. 2 is a schematic view of the sensor component of FIG. 1;
FIG. 3 is a flow chart of a method of image acquisition according to an embodiment of the present invention;
FIG. 4 is a flowchart of one embodiment of step S102 of FIG. 3;
FIG. 5 is a flow diagram of another embodiment of step S102 of FIG. 3;
FIG. 6 is a flowchart of one embodiment of step S1026 of FIG. 5;
FIG. 7 is a flow chart of a variation of the image acquisition method shown in FIG. 3;
FIG. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
As described in the background, existing optical under-screen fingerprint identification devices have many shortcomings: they cannot keep up with the ever-increasing screen-to-body ratio of smart devices, their manufacturing and maintenance costs are high, and their data acquisition and processing speed is slow.
In order to solve the above technical problem, an embodiment of the present invention provides an image capturing device, including: a light-transmitting cover plate having a first face and a second face opposite to each other in the thickness direction, the first face of the light-transmitting cover plate being adapted to contact an object to be acquired; a light source component having a first surface and a second surface opposite to each other in the thickness direction, the first surface of the light source component being arranged toward the second face of the light-transmitting cover plate; and a sensor component arranged on the second surface of the light source component, wherein the sensor component is formed by splicing a plurality of sensor modules, and the plurality of sensor modules are distributed on the same plane.
With the solution of this embodiment, large-area imaging can be achieved by splicing multiple small-area sensor modules together, so as to meet the demand of smart devices for large-size screens, while the data processing speed is high and manufacturing and maintenance costs are reduced.
In particular, the multi-block splicing approach adopted in this embodiment effectively compensates for the inability of existing production capacity to fabricate a single large-size sensor component. By flexibly adjusting the number of spliced sensor modules according to the screen-to-body ratio of the smart device, the image acquisition device of this embodiment can effectively meet the demand of smart devices for large-size screens.
Further, in the prior art there is only a single one-piece sensor component, and every pixel point on it must output data each time a captured image is output; as the sensor area and the number of pixel points grow, this all-at-once output mode inevitably slows down data processing. With the image acquisition device of this embodiment, each sensor module can be controlled independently, so the sensor modules can output data simultaneously and in parallel, which effectively increases the data processing speed of the image acquisition device.
Further, since the prior art has only a one-piece sensor component, the whole component must be replaced even when only a local region is damaged, which undoubtedly increases the manufacturing and maintenance costs of the image capturing device. With the image acquisition device of this embodiment, since the sensor modules are spliced together, only the damaged sensor module needs to be replaced when a local area fails, which effectively reduces the manufacturing and maintenance costs of the image acquisition device.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 is a schematic diagram of an image capturing device according to an embodiment of the present invention. The image capturing device 100 may be an optical under-screen image capturing device, such as an optical under-screen fingerprint capturing device based on the optical total reflection principle.
The image capturing device 100 may be adapted to capture an image of an object to be captured, where the object may be a finger and the image may be a fingerprint image.
Specifically, referring to fig. 1, the image capturing apparatus 100 may include: a light-transmissive cover plate 110, the light-transmissive cover plate 110 having a first face 110a and a second face 110b opposite to each other in a thickness direction (the z direction as shown), the first face 110a of the light-transmissive cover plate 110 being adapted to be in contact with an object to be acquired; a light source section 120, the light source section 120 having a first surface 120a and a second surface 120b opposite to each other in the thickness direction (the z direction as shown), the first surface 120a of the light source section 120 being disposed toward the second face 110b of the light-transmissive cover plate 110; and a sensor part 130, the sensor part 130 being disposed on the second surface 120b of the light source section 120.
For example, the sensor part 130 may be attached to the second surface 120b of the light source part 120, and may be bonded thereto by an optical adhesive.
Alternatively, there may be a gap between the sensor part 130 and the second surface 120b of the light source part 120.
Further, the sensor component 130 may be formed by splicing a plurality of sensor modules 131, and the plurality of sensor modules 131 are distributed on the same plane, where the plane may be parallel to the second surface 120b of the light source component 120, that is, the plane may be perpendicular to the z direction.
For example, the sensor component 130 may include 4 sensor modules 131, and the splicing effect of the 4 sensor modules 131 on the plane is shown in fig. 2.
In one embodiment, the sensor module 131 may be circular, rectangular, polygonal, etc. in shape.
In one embodiment, the sensor component 130 includes a plurality of sensor modules 131 each having the same shape and area. For example, fig. 2 shows 4 sensor modules 131 that are all rectangular and equal in area.
In one variation, one or more of the plurality of sensor modules 131 included in the sensor component 130 may have a shape and/or area different from the other sensor modules 131, so as to flexibly accommodate the varying screen requirements of different smart devices.
In one embodiment, the sensor component 130 includes a plurality of sensor modules 131 that may be arranged in an array and spliced into a rectangular shape as shown in fig. 2.
In practical applications, the arrangement may also be adapted to the screen shape of the smart device; for example, the plurality of sensor modules 131 may be spliced into outer contours such as a polygon, a triangle or a circle.
In one embodiment, the number of sensor modules 131 may be determined according to the shape and area of the light-transmissive cover plate 110 and the shape and area of each sensor module 131.
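For illustration only, the following minimal Python sketch shows one way such a module count could be derived, assuming identical rectangular modules tiled in a regular grid; the function name, parameters and the example dimensions are assumptions made for this sketch and are not taken from the disclosure.

```python
import math

def module_grid(cover_w_mm: float, cover_h_mm: float,
                module_w_mm: float, module_h_mm: float) -> tuple[int, int, int]:
    """Estimate how many identical rectangular sensor modules are needed
    to tile a rectangular light-transmissive cover plate without gaps."""
    cols = math.ceil(cover_w_mm / module_w_mm)   # modules per row
    rows = math.ceil(cover_h_mm / module_h_mm)   # modules per column
    return cols, rows, cols * rows

# Example: a 140 mm x 70 mm cover plate tiled with 70 mm x 35 mm modules
# yields a 2 x 2 arrangement, i.e. a 4-module layout like FIG. 2.
print(module_grid(140.0, 70.0, 70.0, 35.0))  # -> (2, 2, 4)
```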
In one embodiment, to ensure that the image of the object to be captured can be captured completely, adjacent sides of adjacent sensor modules 131 may be abutted against each other.
For example, referring to fig. 1 and 2, adjacent sensor modules 131 may fit closely to ensure that no gap exists between them.
Further, the adjacent sensor modules 131 may be bonded by an optical adhesive or the like to obtain a better bonding effect.
In one embodiment, the light source component 120 may include a plurality of light-emitting points O arranged in an array, and in a plurality of acquisition periods the plurality of light-emitting points O sequentially emit light in an array-translation manner.
For convenience of description, the light emitting points that emit light simultaneously and are located on the same row in the example shown in fig. 2 are collectively referred to as light emitting points Oi, where i is a positive integer. In fig. 2, the light-emitting point O that emits light is represented in the form of a solid circle, and the light-emitting point O that does not emit light is represented in the form of a hollow circle.
For example, the plurality of light-emitting points O sequentially emitting light in an array-shifted manner may mean that light is emitted line by line from the light-emitting point Oi until the light-emitting point O1 emits light.
For another example, referring to fig. 2, in order to improve the image capturing efficiency, the plurality of light-emitting points O sequentially emit light in an array-shifting manner may mean that two spaced rows of light-emitting points emit light simultaneously from the light-emitting point O1, two spaced rows of light-emitting points emit light simultaneously from the light-emitting point O2 in the next capturing period, and so on until two spaced rows of light-emitting points emit light simultaneously from the light-emitting point Oi.
As another example, the direction of translation of the array may also be along a diagonal of the illustrated rectangular sensor module 131. In this way, the light-emitting points O on two mutually perpendicular sides can emit light simultaneously, which facilitates omnidirectional acquisition of the image of the object to be acquired.
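To make the array-translation lighting sequence concrete, here is a minimal Python sketch, assuming a simple row-wise scan in which each acquisition period lights a set of rows spaced a fixed pitch apart; the row count, spacing and function name are illustrative assumptions rather than values from the disclosure.

```python
def lit_rows(period_index: int, num_rows: int, spacing: int = 4) -> list[int]:
    """Rows of light-emitting points O that are lit during one acquisition period.

    The lit pattern starts at row `period_index` and repeats every `spacing`
    rows, so the whole array is swept by translating the pattern one row per
    period (the "array translation" described above).
    """
    start = period_index % spacing
    return list(range(start, num_rows, spacing))

# Example: 12 rows of light-emitting points, several spaced rows lit at once.
for period in range(4):
    print(f"acquisition period {period}: rows {lit_rows(period, num_rows=12)}")
# period 0: rows [0, 4, 8]
# period 1: rows [1, 5, 9]
# period 2: rows [2, 6, 10]
# period 3: rows [3, 7, 11]
```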
In one embodiment, the light source component 120 may be a display panel.
For example, the display panel may be selected from: a liquid crystal display, an active-matrix organic light-emitting diode (AMOLED) display, and a micro light-emitting diode (micro-LED) display.
In one embodiment, the light-transmissive cover plate 110 may be made of a glass material.
In one embodiment, the sensor module 131 may be a photosensor. Imaging may be based on the total reflection principle of physical optics at the transparent cover plate 110, and the image formed by total reflection at the transparent cover plate 110 may be captured by the photosensor.
When the image capturing device 100 is applied to optical under-screen fingerprint recognition, the first face 110a of the transparent cover plate 110 may be used for contacting a fingerprint, the light source component 120 may be arranged on the second face 110b of the transparent cover plate 110, and the light source component 120 may be adapted to emit light signals in different directions toward the first face 110a of the transparent cover plate 110. These light signals are totally reflected at the first face 110a of the transparent cover plate 110 to form totally reflected light in different directions, and the totally reflected light passes through the transparent cover plate 110 and the light source component 120 and is received by the sensor component 130. Since the intensity of the totally reflected light is modulated by the fingerprint profile, an image of the fingerprint can be obtained by collecting the totally reflected light emitted from the second surface 120b of the light source component 120.
For example, referring to fig. 1, when a finger is pressed against the first face 110a of the light-transmissive cover plate 110, one light ray emitted from the light-emitting point O1 may, according to the total reflection principle, image the point A1 onto the point B1 on the surface of the sensor module 131 on the left side of the figure. Similarly, another light ray emitted from the light-emitting point O1 can image the point A2 onto the point B2 on the surface of the sensor module 131 on the right side of the figure.
Therefore, in a single acquisition period, provided that the total reflection condition is satisfied, the image formed on the sensor component 130 by the light emitted from the light-emitting points O after total reflection is distributed over a plurality of sensor modules 131. That is, the image acquired by each sensor module 131 is a partial image of the object to be acquired. Further, the images acquired by the sensor modules 131 are stitched to obtain a complete image of the object to be acquired.
In summary, with the solution of this embodiment, large-area imaging can be achieved by splicing multiple small-area sensor modules 131 together, so as to meet the demand of smart devices for large-size screens, while the data processing speed is high and manufacturing and maintenance costs are reduced.
Specifically, the multi-block splicing approach adopted in this embodiment effectively compensates for the inability of existing production capacity to fabricate a single large-size sensor component, and the number of spliced sensor modules 131 can be flexibly adjusted according to the screen-to-body ratio of the smart device, so that the image acquisition device 100 of this embodiment can effectively meet the demand of smart devices for large-size screens.
Further, in the prior art there is only a single one-piece sensor component, and every pixel point on it must output data each time a captured image is output; as the sensor area and the number of pixel points grow, this all-at-once output mode inevitably slows down data processing. In the image capturing device 100 according to the embodiment of the present invention, each sensor module 131 can be independently controlled, so that the sensor modules 131 can output data simultaneously and in parallel, thereby effectively increasing the data processing speed of the image capturing device 100.
Further, since the prior art has only a one-piece sensor component, the whole component must be replaced even when only a local region is damaged, which undoubtedly increases the manufacturing and maintenance costs of the image capturing device. In the image capturing device 100 according to this embodiment, since the sensor modules 131 are spliced together, only the damaged sensor module 131 needs to be replaced when a local area is damaged, so the manufacturing and maintenance costs of the image capturing device 100 are effectively reduced.
Fig. 3 is a flowchart of an image capturing method according to an embodiment of the present invention. The solution of this embodiment can be applied to an optical under-screen image processing scenario; for example, it can be executed by a smart device equipped with the image capturing apparatus 100 shown in fig. 1 and fig. 2, so as to acquire an image of an object to be captured that contacts the transparent cover plate 110. The object to be captured may be a finger, and the image may be a fingerprint image.
Specifically, referring to fig. 3, the image capturing method according to this embodiment may include the following steps:
step S101, acquiring images respectively acquired by a plurality of sensor modules 131, wherein each image is a local image of an object to be acquired;
step S102, stitching the acquired multiple images to obtain an image of the object to be acquired.
In one embodiment, the plurality of sensor modules 131 are distributed on the same plane, as shown in fig. 2. Accordingly, referring to fig. 4, the step S102 may include the following steps:
step S1021, for each sensor module, according to the position of the sensor module on the plane, determining the position of the image acquired by the sensor module in the image of the object to be acquired;
step S1022, stitching the plurality of images according to the determined positions to obtain the image of the object to be acquired.
Referring to fig. 2, taking the sensor module 131 located at the upper left corner as an example, according to the arrangement position of this sensor module 131 within the sensor component 130 composed of the plurality of sensor modules 131, it can be determined that the image acquired by this sensor module 131 belongs to the upper left corner of the image of the object to be acquired.
Similarly, the image captured by the sensor module 131 located at the upper right corner belongs to the upper right corner of the image of the object to be captured.
Similarly, the image captured by the sensor module 131 located at the lower left corner belongs to the lower left corner of the image of the object to be captured.
Similarly, the image captured by the sensor module 131 located at the lower right corner belongs to the lower right corner of the image of the object to be captured.
Therefore, by stitching the images acquired by the sensor modules 131 according to the arrangement positions of the sensor modules 131 on the plane, the image of the object to be acquired can be obtained.
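As an illustration of steps S1021 and S1022, the following minimal Python/NumPy sketch stitches the partial images into one frame according to each module's position on the plane; the 2 x 2 layout, the array shapes and the function name are assumptions made for this sketch, not specifics from the disclosure.

```python
import numpy as np

def stitch_by_position(sub_images: dict[tuple[int, int], np.ndarray]) -> np.ndarray:
    """Stitch module images into one frame.

    `sub_images` maps a module's (row, col) position on the common plane to the
    partial image it acquired (step S1021 determines that position; step S1022
    places each partial image accordingly).
    """
    rows = max(r for r, _ in sub_images) + 1
    cols = max(c for _, c in sub_images) + 1
    tile_h, tile_w = next(iter(sub_images.values())).shape
    full = np.zeros((rows * tile_h, cols * tile_w), dtype=np.float32)
    for (r, c), tile in sub_images.items():
        full[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
    return full

# Example: four 4x4 partial images from a 2 x 2 module layout as in FIG. 2.
tiles = {(r, c): np.full((4, 4), 10 * r + c, dtype=np.float32)
         for r in range(2) for c in range(2)}
print(stitch_by_position(tiles).shape)  # -> (8, 8)
```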
In one embodiment, referring to fig. 1 and 2, the plurality of sensor modules 131 acquire images under the illumination of the light source component 120, where the light source component 120 includes a plurality of light-emitting points O arranged in an array, and in a plurality of acquisition periods the plurality of light-emitting points O sequentially emit light in an array-translation manner.
Accordingly, referring to fig. 5, after the step S1022, the step S102 may further include the steps of:
step S1023, acquiring a plurality of images of the object to be acquired, which are respectively spliced in a plurality of acquisition cycles, wherein the image of the object to be acquired, which is spliced in the current acquisition cycle, is translated for a preset distance in a first direction relative to the image of the object to be acquired, which is spliced in the previous acquisition cycle, and the first direction is parallel to the translation direction of the array;
step S1024, determining the image of the object to be acquired, which is obtained by splicing in each acquisition period, as an image to be processed;
step S1025, translating the plurality of images to be processed along a second direction so as to align the translated plurality of images to be processed and obtain a processed image, wherein the second direction is consistent with the first direction;
and step S1026, generating a final image of the object to be acquired based on the processed image.
For example, referring to fig. 1, it is assumed that the light-emitting point O2 belongs to the light-emitting array and the light-emitting point O1 belongs to the non-light-emitting array in the first acquisition cycle. According to the total reflection principle, the light emitted from the light-emitting point O2 is totally reflected and then irradiated to the point C. However, the point C lies on the boundary between two adjacent sensor modules 131, and therefore an image of the point A3 cannot be formed at the point C, so that a blank area exists at the position corresponding to the point A3 in the image of the object to be acquired stitched in the first acquisition cycle. The image of the object to be acquired obtained by stitching in the first acquisition cycle is then recorded as the image to be processed 1.
Assuming that the array of light-emitting points O is translated in the right-to-left direction as shown, the light-emitting position is shifted from the light-emitting point O2 to the light-emitting point O1 in the second acquisition cycle. According to the total reflection principle, the light emitted from the light-emitting point O1 is imaged onto the point B3 on the surface of the right-hand sensor module 131 after being totally reflected. Therefore, an image of the point A3 can be formed at the point B3, so that the position corresponding to the point A3 in the image of the object to be captured stitched in the second acquisition period has image content. The image of the object to be acquired obtained by stitching in the second acquisition cycle is then recorded as the image to be processed 2.
Further, a blank area also exists at the corresponding position of the to-be-processed image 2 at the point C, and a translation corresponding relationship exists between the theoretical imaging of the to-be-processed image 2 at the corresponding position of the point C and the theoretical imaging of the to-be-processed image 1 at the corresponding position of the point C. Therefore, all blank areas can be complemented by translating and aligning the images to be processed which are spliced in the plurality of acquisition periods, and a complete image of the object to be acquired is obtained.
Further, according to the total reflection principle, as the light-emitting points O are translated from right to left as shown in fig. 1, the images of the object to be acquired on the sensor modules 131 are synchronously translated from left to right. Therefore, in order to align the same region of the images to be processed obtained in the respective acquisition cycles, the image to be processed obtained in a later acquisition cycle needs to be translated back to the position, relative to a common reference point, of the image to be processed obtained in the previous acquisition cycle.
The reference point may be obtained by presetting, for example, the position of the leftmost light-emitting point O in the first acquisition period is determined as the reference point.
Thus, in the step S1025, the translation direction of the image to be processed (i.e., the second direction) is the same direction as the array translation direction of the light emitting points O (i.e., the first direction). For example, the image to be processed 2 is translated to the left to be aligned with the image to be processed 1, such as to align the outer contours of the two.
Therefore, after the images to be processed obtained in each acquisition period are respectively translated to be aligned, blank areas in other images to be processed can be complemented based on different images to be processed, and a complete image of the object to be acquired is obtained.
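To illustrate steps S1023 to S1025, here is a minimal Python/NumPy sketch, under the simplifying assumptions that each to-be-processed image is an already-stitched 2-D array, that blank areas are marked with NaN, and that the per-period translation is an integer pixel shift along one axis; the shift size, the NaN convention and the function names are assumptions for this example only.

```python
import numpy as np

def align_and_merge(images_to_process: list[np.ndarray], shift_px: int) -> np.ndarray:
    """Translate each to-be-processed image back along the second direction and
    merge them so that blank (NaN) areas of one period are filled by another.

    `images_to_process[k]` is the image stitched in acquisition period k, which
    the scanning light source has displaced by k * shift_px pixels along the
    translation axis relative to period 0.
    """
    merged = np.array(images_to_process[0], dtype=np.float64)
    for k, img in enumerate(images_to_process[1:], start=1):
        aligned = np.roll(img, -k * shift_px, axis=1)  # undo the per-period shift
        gaps = np.isnan(merged)                        # blank areas still missing
        merged[gaps] = aligned[gaps]                   # fill them from this period
    return merged

# Example: two 1x6 "images"; the blank pixel of period 0 is filled by period 1.
p0 = np.array([[1.0, 2.0, np.nan, 4.0, 5.0, 6.0]])
p1 = np.roll(p0, 1, axis=1); p1[0, 3] = 3.0            # period 1 is shifted and sees point 3
print(align_and_merge([p0, p1], shift_px=1))           # -> [[1. 2. 3. 4. 5. 6.]]
```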
Further, referring to fig. 6, the step S1026 may include the steps of:
step S10261, judging whether the integrity of the processed image reaches a preset threshold value;
when the determination result of the step S10261 is negative, that is, when the integrity of the processed image is smaller than the preset threshold, continuing to execute the steps S101 to S1026 to perform image acquisition in the next acquisition cycle, and translating the acquired image to be aligned with the processed image along the second direction to obtain an updated processed image;
further, the step S10261 is repeatedly executed to determine whether the integrity of the updated processed image reaches the preset threshold value, until the integrity of the updated processed image reaches the preset threshold value;
step S10262, determining the updated processed image as the final image of the object to be captured.
In an embodiment, referring to fig. 7, after the step S102, the image capturing method according to this embodiment may further include the following steps:
step S103, judging whether the integrity of the image of the object to be acquired reaches a preset threshold value;
when the judgment result in the step S103 is negative, that is, when the integrity of the image of the object to be acquired is smaller than the preset threshold, executing step S104 to determine the sensor module 131 corresponding to the blank area in the image of the object to be acquired;
step S105, acquiring an image acquired by the sensor module 131 corresponding to the blank area in the next acquisition cycle, and repeatedly executing the step S102 to stitch the acquired image into the image of the object to be acquired so as to obtain an updated image of the object to be acquired;
further, the step S103 is repeatedly executed to determine whether the integrity of the updated image of the object to be acquired reaches a preset threshold, and when the integrity of the updated image of the object to be acquired reaches the preset threshold, the step S106 is executed to determine that the updated image of the object to be acquired is the final image of the object to be acquired.
Thus, since each sensor module 131 is independently controlled and independently outputs data, when the stitched image of the object to be acquired is partially missing, the solution of this embodiment can control only the sensor module 131 corresponding to the missing part to output an image in the next acquisition period and stitch that image into the image of the object to be acquired obtained in the previous acquisition period, so as to obtain a complete image of the object to be acquired. Further, during the next acquisition cycle, the other sensor modules 131 may be in a sleep state to save power consumption of the image acquisition apparatus 100 and the smart device.
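For steps S103 to S106, the sketch below (Python, with a hypothetical grid mapping and `reacquire` callback; the 2 x 2 layout is assumed for illustration) shows how a blank region can be traced back to the sensor module 131 that produced it, so that only this module is read out in the next acquisition period while the others stay asleep.

```python
import numpy as np

def modules_with_gaps(image: np.ndarray, grid: tuple[int, int]) -> set[tuple[int, int]]:
    """Return the (row, col) positions of the sensor modules whose regions of the
    stitched image still contain blank (NaN) pixels (step S104)."""
    rows, cols = grid
    tile_h, tile_w = image.shape[0] // rows, image.shape[1] // cols
    gapped = set()
    for r in range(rows):
        for c in range(cols):
            tile = image[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
            if np.isnan(tile).any():
                gapped.add((r, c))
    return gapped

def patch_next_period(image: np.ndarray, grid: tuple[int, int], reacquire) -> np.ndarray:
    """Step S105: re-read only the modules with blank areas and stitch the new
    partial images back in; the remaining modules can stay in a sleep state."""
    rows, cols = grid
    tile_h, tile_w = image.shape[0] // rows, image.shape[1] // cols
    for (r, c) in modules_with_gaps(image, grid):
        image[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = reacquire(r, c)
    return image

# Example: in a 2 x 2 layout only the lower-right module produced a blank tile.
img = np.ones((4, 4)); img[2:, 2:] = np.nan
print(modules_with_gaps(img, (2, 2)))                        # -> {(1, 1)}
print(patch_next_period(img, (2, 2), lambda r, c: np.full((2, 2), 7.0)))
```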
Specifically, the preset threshold may be 90%. In practical application, a person skilled in the art can adjust the specific value of the preset threshold value as needed to meet the requirements of different intelligent devices on the safety of the device and the accuracy of fingerprint identification.
In a variation, the embodiments shown in fig. 6 and 7 can be combined to obtain a new embodiment, thereby flexibly adapting to diversified application scenarios.
For example, when the determination result of the step S10261 indicates that the integrity of the processed image is smaller than the preset threshold and the blank areas are relatively concentrated, the steps S104 and S105 may be executed to separately obtain the image acquired by the sensor module 131 corresponding to the blank area in the next acquisition cycle and stitch it into the processed image, until the integrity of the updated processed image reaches the preset threshold.
In a fingerprint unlocking scene, images of objects to be acquired, which are respectively spliced in a small number of acquisition periods, can be translated and aligned to obtain a processed image, and when the integrity of the processed image reaches the preset threshold value, the fingerprint identification can be determined to be successful, so that the unlocking operation is completed.
However, if the integrity of the processed image obtained based on a small number of acquisition cycles is less than the preset threshold, the image acquisition and processing operations of the next acquisition cycle may be continued until the integrity of the updated processed image reaches the preset threshold.
In a typical application scenario, the number of acquisition periods may be 10 to 20 during the fingerprint enrollment stage, and 3 to 4 during the fingerprint unlocking stage.
Therefore, a complete image of the object to be acquired can be accurately synthesized from the images acquired by the spliced sensor modules 131, avoiding image distortion or locally missing content.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Specifically, the electronic device 80 may include: an image acquisition device 81; a processor 82, the processor 82 being coupled with the image acquisition device 81; and a memory 83, the memory 83 having stored thereon a computer program which, when executed by the processor 82, carries out the steps of the method described above with reference to fig. 3 to 7.
Further, for the structure and function of the image acquisition device 81, reference may be made to the description of the image capturing device 100 shown in fig. 1 and fig. 2, which is not repeated here.
In one embodiment, the electronic device 80 may have an optical under-screen fingerprint recognition function, and the image capturing apparatus 81 may include: a light-transmissive cover plate 110; a light source component 120 and a sensor component 130, wherein the sensor component 130 is formed by splicing a plurality of sensor modules 131.
Specifically, the image capturing device 81 may perform imaging based on the total reflection principle of physical optics, and may obtain an image of an object to be captured placed on the first face 110a of the transparent cover plate 110 by stitching the images captured by the plurality of sensor modules 131.
In one embodiment, when the computer program stored in the memory 83 is executed by the processor 82 to perform the method shown in fig. 3 to 7, the images acquired by the sensor modules 131 may be acquired first, and then the images may be stitched to obtain the image of the object to be acquired.
In one embodiment, the light-emitting points O of the light source component 120 sequentially emit light in an array-translation manner during a plurality of acquisition cycles; accordingly, when executed by the processor 82 to perform the method of fig. 3 to 7, the computer program is further adapted to translate and align the images of the object to be acquired that are respectively stitched in the plurality of acquisition periods, so as to obtain the final image of the object to be acquired.
In one embodiment, the electronic device 80 may be a mobile phone, a smart band, a wristwatch, or the like.
Embodiments of the present invention also provide a storage medium, on which computer instructions (also referred to as a computer program) are stored, and when the computer instructions are executed, the steps of the method shown in fig. 3 to 7 are executed.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

8. The image acquisition method according to claim 7, wherein the plurality of sensor modules acquire an image under the irradiation of a light source component, wherein the light source component comprises a plurality of light-emitting points arranged in an array, and the plurality of light-emitting points sequentially emit light in an array translation manner in a plurality of acquisition periods; the stitching the acquired plurality of images to obtain the image of the object to be acquired further comprises: acquiring a plurality of images of the object to be acquired, which are respectively spliced in a plurality of acquisition periods, wherein the image of the object to be acquired, which is spliced in the current acquisition period, is translated for a preset distance in a first direction relative to the image of the object to be acquired, which is spliced in the previous acquisition period, and the first direction is parallel to the direction of the array translation;
CN201910380813.0A | 2019-05-08 | 2019-05-08 | Image acquisition method and device, storage medium and electronic equipment | Pending | CN111914593A (en)

Priority Applications (3)

Application Number | Priority Date | Filing Date | Title
CN201910380813.0A / CN111914593A (en) | 2019-05-08 | 2019-05-08 | Image acquisition method and device, storage medium and electronic equipment
US16/869,318 / US11582373B2 (en) | 2019-05-08 | 2020-05-07 | Image capturing apparatus and method, storage medium and electronic equipment
TW109115214A / TWI811540B (en) | 2019-05-08 | 2020-05-07 | Image acquisition method, device, storage medium, and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910380813.0A | 2019-05-08 | 2019-05-08 | Image acquisition method and device, storage medium and electronic equipment

Publications (1)

Publication Number | Publication Date
CN111914593A (en) | 2020-11-10

Family

ID=73242553

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910380813.0A (Pending) | CN111914593A (en) Image acquisition method and device, storage medium and electronic equipment | 2019-05-08 | 2019-05-08

Country Status (1)

Country | Link
CN (1) | CN111914593A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20040114784A1 (en) * | 2002-11-12 | 2004-06-17 | Fujitsu Limited | Organism characteristic data acquiring apparatus, authentication apparatus, organism characteristic data acquiring method, organism characteristic data acquiring program and computer-readable recording medium on which the program is recorded
JP2006275609A (en) * | 2005-03-28 | 2006-10-12 | Toppan Printing Co Ltd | Periodic pattern unevenness inspection apparatus and unevenness inspection method
EP2275969A1 (en) * | 2006-04-10 | 2011-01-19 | Electrolux Home Products Corporation N.V. | Household appliance with fingerprint sensor
CN106022292A (en) * | 2016-05-31 | 2016-10-12 | 京东方科技集团股份有限公司 | Display device and fingerprint identification method thereof
CN106067005A (en) * | 2016-06-03 | 2016-11-02 | 成都艾德沃传感技术有限公司 | Fingerprint recognition system processing method and fingerprint recognition system
CN108133175A (en) * | 2017-11-30 | 2018-06-08 | 北京集创北方科技股份有限公司 | Fingerprint identification method, device and system, electronic equipment
CN109416737A (en) * | 2018-09-21 | 2019-03-01 | 深圳市汇顶科技股份有限公司 | Fingerprint identification device and electronic equipment
CN109564624A (en) * | 2018-10-31 | 2019-04-02 | 深圳市汇顶科技股份有限公司 | Recognize the method, apparatus and electronic equipment of fingerprint Logo
CN109690567A (en) * | 2018-12-14 | 2019-04-26 | 深圳市汇顶科技股份有限公司 | Fingerprint identification devices and electronic equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20040114784A1 (en) * | 2002-11-12 | 2004-06-17 | Fujitsu Limited | Organism characteristic data acquiring apparatus, authentication apparatus, organism characteristic data acquiring method, organism characteristic data acquiring program and computer-readable recording medium on which the program is recorded
JP2006275609A (en) * | 2005-03-28 | 2006-10-12 | Toppan Printing Co Ltd | Periodic pattern unevenness inspection apparatus and unevenness inspection method
EP2275969A1 (en) * | 2006-04-10 | 2011-01-19 | Electrolux Home Products Corporation N.V. | Household appliance with fingerprint sensor
CN106022292A (en) * | 2016-05-31 | 2016-10-12 | 京东方科技集团股份有限公司 | Display device and fingerprint identification method thereof
US20180173926A1 (en) * | 2016-05-31 | 2018-06-21 | Boe Technology Group Co., Ltd. | Display device and fingerprint identification method thereof
CN106067005A (en) * | 2016-06-03 | 2016-11-02 | 成都艾德沃传感技术有限公司 | Fingerprint recognition system processing method and fingerprint recognition system
CN108133175A (en) * | 2017-11-30 | 2018-06-08 | 北京集创北方科技股份有限公司 | Fingerprint identification method, device and system, electronic equipment
CN109416737A (en) * | 2018-09-21 | 2019-03-01 | 深圳市汇顶科技股份有限公司 | Fingerprint identification device and electronic equipment
CN109564624A (en) * | 2018-10-31 | 2019-04-02 | 深圳市汇顶科技股份有限公司 | Recognize the method, apparatus and electronic equipment of fingerprint Logo
CN109690567A (en) * | 2018-12-14 | 2019-04-26 | 深圳市汇顶科技股份有限公司 | Fingerprint identification devices and electronic equipment

Similar Documents

Publication | Publication Date | Title
CN107066162B (en)Display panel and display device
US20200327296A1 (en)Optical fingerprint identification apparatus and electronic device
US10810392B2 (en)Barcode reader
CN111095284B (en)Fingerprint detection device, fingerprint detection method and electronic equipment
US8306287B2 (en)Biometrics authentication system
CN107798289A (en)Biological image sensing system with variable light field
CN106022292A (en)Display device and fingerprint identification method thereof
CN205656407U (en)Display device
US11068692B2 (en)Image capturing device under screen and electronic equipment
CN111971616B (en)Backlight module, display device and preparation method of backlight module
US20140125810A1 (en)Low-profile lens array camera
US11068684B2 (en)Fingerprint authentication sensor module and fingerprint authentication device
US11582373B2 (en)Image capturing apparatus and method, storage medium and electronic equipment
CN111914593A (en)Image acquisition method and device, storage medium and electronic equipment
US20210264625A1 (en)Structured light code overlay
US20190149686A1 (en)Contact Image Sensor and Image Scanning Device
TWM602666U (en)Electronic device with fingerprint sensor and high resolution display adapted to each other
US11769343B2 (en)Fingerprint sensor, fingerprint module, and terminal device
CN214540788U (en)Optical fingerprint identification module
US11356583B2 (en)Image capturing apparatus, electronic equipment and terminal
CN205451322U (en)POS machine with bar code recognition engine
CN209401041U (en) Biometric modules, mobile terminals and electronic devices
US9942479B2 (en)Electronic device and imaging method thereof
CN111985415B (en)Display panel, display device and control method of display panel
JP6001361B2 (en) Electronic component mounting method, component mounting head, and electronic component mounting apparatus

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2020-11-10
