CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of copending U.S. application Ser. No. 08/824,288, filed Mar. 26, 1997, and U.S. application Ser. No. 08/797,552, filed Jan. 31, 1997, entitled “Portable Data Collection Device With Variable Focusing Module for Optic Assembly”. The aforesaid copending applications are incorporated herein in their entireties by reference.[0001]
FIELD OF THE INVENTION

The present invention relates to a portable data collection device including a two dimensional photosensor array imaging assembly and, more particularly, to a portable data collection device having a two dimensional photosensor array imaging assembly selectively actuatable to read a bar code dataform and record an image of an item of interest and further having an optic assembly with a variable focusing module to change a best focus distance of the optic assembly.[0002]
BACKGROUND OF THE INVENTION

Portable data collection devices are widely used in manufacturing, service and package delivery industries to perform a variety of on-site data collection activities. Such portable data collection devices often include integrated bar code dataform readers adapted to read bar code dataforms affixed to products, product packaging and/or containers in warehouses, retail stores, shipping terminals, etc. for inventory control, tracking, production control and expediting, quality assurance and other purposes. Various bar code dataform readers have been proposed for portable data collection devices, including laser scanners and one dimensional (1D) charge coupled device (CCD) imaging assemblies, both of which are capable of reading 1D bar code dataforms, that is, bar codes consisting of a single row of contrasting black bars and white spaces of varying widths. Both of these readers are also capable of reading “stacked” two dimensional (2D) bar code dataforms such as PDF-417, which has row indicator patterns utilized by the reader for vertical synchronization.[0003]
A two dimensional (2D) imaging based dataform reader has been proposed in U.S. application Ser. No. 08/544,618, filed Oct. 18, 1995 and entitled “Extended Working Range Dataform Reader Including Fuzzy Logic Image Control Circuitry”, now issued as U.S. Pat. No. 5,702,059 on Dec. 30, 1997. The 2D dataform reader disclosed in application Ser. No. 08/544,618 includes an imaging assembly having a two dimensional array of photosensors or photodiodes adapted to read 2D bar code dataforms (e.g., PDF-417, Supercode, etc.) with vertical synchronization row indicator patterns as well as matrix dataforms (e.g., MaxiCode, Data Matrix, Code 1, etc.) which do not include vertical synchronization patterns. The photosensors correspond to image pixels of a captured image frame and the terms “photosensors” and “pixels” will be used interchangeably. The 2D dataform reader disclosed in U.S. Pat. No. 5,702,059 utilizes an open loop feedback control system including fuzzy logic circuitry to determine proper exposure time and gain parameters for a camera assembly. U.S. Pat. No. 5,702,059 is incorporated in its entirety herein by reference.[0004]
While using a portable data collection device to sequentially read bar code dataforms affixed to products or containers in a production facility, warehouse or retail store, an operator may come upon an item which is damaged, incomplete, mislabeled, in the wrong location, etc. In such an event, it would be desirable for the operator to make a note of the problem item so that appropriate corrective action may be taken by supervisory personnel. However, requiring the operator to make a handwritten notation on a clipboard or input information concerning the item using a keyboard or keypad of the portable data collection device is both time consuming and error prone. What is needed is a portable data collection device having a 2D imaging assembly that can be actuated to read bar code dataforms by depressing a trigger and, when a problem item is found, the imaging assembly can be actuated with a separate trigger to record an image of the problem item. This would enable “information”, that is, an image of the problem item, to be recorded without seriously interrupting the normal course of the operator's work. Additionally, it would be desirable to transmit the recorded image of the problem item to appropriate supervisory personnel so that appropriate corrective action may be taken. In certain instances, it may be sufficient to record a single frame of the image of a problem item, while in other cases, for example, if the item is larger than a field of view or target area of the imaging assembly, it may be necessary to record a continuous video image of the problem item to permit the operator to record a complete view of the item. It would also be desirable to provide an audio capture module to simultaneously capture the operator's voice, enabling the operator to provide further identification and/or commentary on the problem item to aid supervisory personnel in locating the item and taking appropriate corrective action.[0005]
Additionally, what is needed is a portable data collection device including an illumination assembly and a viewing assembly to assist the operator in properly aiming and positioning the portable data collection device with respect to a target object such that the target object is within a target area of the imaging assembly. A size of a target area of the imaging assembly is defined by a field of view of the imaging assembly and a distance between the imaging assembly and the target object. The target object may be a dataform to be read or an item to be imaged. Preferably the illumination assembly will include targeting optics which will project a “crosshair” shaped targeting beam of visible light corresponding to the field of view of the imaging assembly to aid an operator in aiming the device at the target object.[0006]
A viewing assembly would permit the operator to visualize the target area and the target object. Visualizing the target area of the imaging assembly would facilitate proper alignment of the target area and the target object, thus ensuring that the device is properly aimed. Further, visualizing the imaging target area and the target object would aid the operator in positioning the device relative to the target object such that the target object is encompassed within an outer perimeter of the target area.[0007]
Furthermore, in package delivery applications, upon delivery of a package, the delivery person typically uses a portable data collection device to read a bar code dataform affixed to the delivered package. Normally, the delivery person also obtains a signature of the person receiving the package. Typically, the signature of the person receiving the package is on a sheet of paper that must be filed with the package delivery records or on a signature capture digitizer pad so that the signature may be electronically filed.[0008]
What is needed is a portable data collection device having a 2D imaging assembly that can be actuated to read a bar code dataform by depressing one trigger and can be actuated by a separate trigger, or applications software, to record an image of a signature of a person receiving a package so that the signature can be filed electronically.[0009]
As an alternative to using one trigger to read a bar code dataform and a second trigger to image an adjacent signature block with a recipient's signature included therein, a single trigger could be used to image and decode a dataform and capture an image of the recipient's signature. If the dataform includes encoded data regarding the position of the signature block with respect to the dataform, the output data could include decoded dataform data and data representing the portion of the captured image corresponding to the signature block area. What is needed is a portable data collection device that can be actuated by a single trigger to capture an image of a bar code dataform and an adjacent signature block, decode the bar code dataform, determine the position of the signature block, and output a compressed digitized representation of the portion of the image comprising the signature block for subsequent downloading to a remote device.[0010]
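The signature block extraction described above can be sketched as follows. The function name, the pixel coordinate convention and the assumption that the dataform encodes a (row, column) offset and a block size in pixels are illustrative only; the specification does not fix a particular encoding.

```python
def crop_signature_block(frame, dataform_origin, block_offset, block_size):
    """Extract the signature block region from a captured frame.

    frame           -- 2D list of gray scale pixel values (rows of columns)
    dataform_origin -- (row, col) of the dataform's reference corner in the frame
    block_offset    -- (d_row, d_col) of the signature block relative to that
                       corner, as hypothetically decoded from the dataform
    block_size      -- (height, width) of the signature block in pixels
    """
    r0 = dataform_origin[0] + block_offset[0]
    c0 = dataform_origin[1] + block_offset[1]
    height, width = block_size
    # Slice out the rectangular sub-image covering the signature block.
    return [row[c0:c0 + width] for row in frame[r0:r0 + height]]
```

The cropped region would then be compressed and appended to the decoded dataform data for downloading.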
What is also needed is an optic assembly for focusing an image of the target area onto a two dimensional photosensor array wherein the optic assembly includes a focusing module to permit the best focusing distance of the optic assembly to be changed by the operator manually or changed automatically in response to a signal representative of the sharpness of an image of a target area.[0011]
SUMMARY OF THE INVENTION

In accordance with this invention, a portable data collection device is provided that includes a two dimensional (2D) photosensor array imaging assembly selectively actuatable for reading bar code dataforms (bar code dataform reading mode) and recording an image of an item in the imaging assembly's target area (imaging mode). A size of the target area is dependent on a field of view of the imaging assembly and a distance between the imaging assembly and a target object, the object being either a dataform to be read or an item to be imaged. The portable data collection device includes two trigger switches, a first trigger actuatable for reading a bar code dataform and a second trigger actuatable for recording an image of an item in the target area. In a radio embodiment of the portable data collection device of the present invention, a radio module is provided for transmitting an output signal to a remote device. In a batch embodiment of the portable data collection device of the present invention, an output signal is coupled to a terminal processing board for further processing and storage.[0012]
The imaging assembly of the portable data collection device of the present invention further includes control and selection circuitry which receives input signals from an operator of the portable data collection device and determines and formats an appropriate output signal. The output signal may include data from a decoded dataform imaged in a captured image frame, a compressed representation of a captured image, an uncompressed representation of a captured image, or a combination of these. If the desired output signal is decoded dataform data, the selection circuitry will utilize image processing and decoding circuitry to decode the dataform.[0013]
Alternately, if the desired output signal is to represent an image of a field of view of a camera assembly of the imaging assembly, the selection circuitry may output the entire frame of image data from the buffer memory or, if appropriate, invoke a compression module to compress the image to reduce the quantity of data to be transmitted by a radio module of the portable data collection device to a remote device or to be output to a terminal processing board of the portable data collection device.[0014]
As discussed, the portable data collection device of the present invention includes two manually activated trigger switches for controlling the selection circuitry to select between an image capture mode and a dataform decoding mode. A first trigger switch, the dataform decoding trigger, institutes the dataform decoding mode and signals the selection circuitry to output a decoded representation of a dataform in a captured image frame. The second trigger switch, the imaging trigger, institutes the imaging mode and has two operating embodiments. In the first operating embodiment of the imaging mode, depressing the imaging trigger results in the imaging assembly capturing one frame of the field of view or target area of the camera assembly. In the second operating embodiment of the imaging mode, depressing the imaging trigger results in the imaging assembly continuously capturing successive frames as long as the trigger is depressed.[0015]
In a third operating embodiment of the portable data collection device of the present invention, activation of the dataform reading trigger will result in both decoded data and at least a portion of the captured image frame being output. This embodiment would advantageously be employed in a situation where a dataform is associated with, for example, a signature block in proximity to the dataform wherein the dataform includes encoded data setting forth the position of the signature block with respect to some predetermined location on the dataform. When the dataform decoding trigger is actuated, an image of the dataform and associated signature block is captured. The dataform is decoded and the decoded data is analyzed by the selection circuitry to determine the location of the signature block. The output signal includes both the decoded data and an image of the signature block.[0016]
Advantageously, the portable data collection device of the present invention includes a voice capture module which captures and digitizes sound received through a microphone mounted on the device during actuation of the second trigger. This feature enables an operator to “attach” a verbal message to the captured image. The digitized signal representing the captured sound portion is processed by a voice compression module prior to output to the radio module or the terminal processing board.[0017]
The imaging assembly includes a board camera assembly having a photosensor array assembly including a two dimensional (2D) array of photosensors or pixels and a control and decoder board. The control and decoder board includes decoding circuitry, image compression circuitry, control and selection circuitry, serial output circuitry, exposure parameter control circuitry and image buffering circuitry including signal processing circuitry and a frame buffer memory. The signal processing circuitry includes synchronization extractor circuitry and analog to digital (A/D) converter circuitry for converting a composite video signal generated by the board camera assembly to digital image data. The decoding circuitry includes a decoder for decoding 1D and 2D bar code dataforms. The exposure parameter control circuitry includes fuzzy logic control circuitry for controlling the frame exposure period and gain adjustment of the board camera assembly.[0018]
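The fuzzy logic exposure control itself is not reproduced here; the sketch below substitutes a plain proportional adjustment simply to illustrate the closed-loop idea of steering exposure period and gain toward a usable image brightness. The function name, the target gray level and all numeric limits are illustrative assumptions, not the specification's values.

```python
def adjust_exposure(mean_gray, exposure_ms, gain,
                    target=128, max_exposure_ms=33.0, max_gain=8.0):
    """One control-loop iteration: scale the exposure period toward a
    target mean gray level (0-255); raise gain only once the exposure
    period has reached its ceiling (illustrative limits only)."""
    error = target / max(mean_gray, 1)        # >1 means the frame is too dark
    exposure_ms = min(exposure_ms * error, max_exposure_ms)
    if exposure_ms >= max_exposure_ms and error > 1:
        # Exposure alone cannot brighten the frame further; boost gain.
        gain = min(gain * error, max_gain)
    return exposure_ms, gain
```

Repeating this adjustment over successive frames converges the captured image toward the target brightness, which is the role the fuzzy logic circuitry plays in the disclosed device.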
The imaging assembly further includes an illumination assembly for illuminating a target item in the imaging assembly target area and an optic assembly for focusing reflected light from the target area upon the 2D array of photosensors of the photosensor array assembly.[0019]
The optic assembly includes a plurality of lenses positioned in front of the 2D photosensor array for focusing reflected light from the target area onto the photosensor array. A shroud supports the optic assembly and shrouds ambient illumination from the photosensor array. The optic assembly also includes a variable focusing module for varying the best focus distance of the optic assembly. The focusing module of the present invention permits clear imaging of an object as near as 5.5 inches (140 mm.) from a front lens of the optic assembly to as far as 36 inches (915 mm.) from the optic assembly, that is, the focusing module provides for a best focus range of 5.5 inches to 36 inches.[0020]
The board camera assembly includes the 2D photosensor array, exposure period control circuitry and gain control circuitry mounted on a printed circuit board. The illumination assembly includes an array of LED illuminators for uniformly illuminating the target area and two targeting LED illuminators for generating a crosshair illumination intensity pattern for properly aiming the portable data collection device. In a first embodiment of the illumination assembly, a lens array is disclosed having a first targeting optics which generates a first crosshair illumination pattern and a second targeting optics which generates a second crosshair illumination pattern, the first and second illumination patterns coinciding at a distance corresponding to a minimum value of the best focus range of the optic assembly, that is, at a distance of approximately 5.5 inches (140 mm.) from the front lens of the optic assembly. In a second embodiment, a lens array is disclosed having a first targeting optics which generates a half frame and crosshair illumination pattern and a second targeting optics which generates a complementary half frame and crosshair illumination pattern. At the minimum value best focus position, the first and second illumination patterns combine to generate a full frame and single crosshair illumination pattern.[0021]
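The coincidence of the two crosshair patterns at one particular distance follows from simple parallax geometry: two beams launched from laterally separated targeting optics, each tilted inward, intersect at a single distance in front of the panel. The sketch below states that relation; the function name and the assumption that both beams are tilted symmetrically are illustrative, not taken from the specification.

```python
import math

def crossing_distance_mm(lateral_separation_mm, tilt_deg):
    """Distance in front of the panel at which two targeting beams meet,
    given the separation of their exit points and the inward tilt of each
    beam (symmetric-tilt parallax geometry; illustrative only)."""
    # Each beam must cover half the separation; tan(tilt) converts the
    # inward angle into lateral travel per unit of forward distance.
    return (lateral_separation_mm / 2.0) / math.tan(math.radians(tilt_deg))
```

Closer than this crossing distance the two crosshairs appear separated on the target, which is the visual cue the operator uses for range, as illustrated in FIG. 28.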
The device further includes a viewing assembly to further aid in aiming and positioning the portable data collection device with respect to a target object. A pivoting member is manually pivotable into an upright position in a line of vision of the operator. The pivoting member defines an aperture. The operator holds the device at a fixed distance with respect to his or her viewing eye and looks through the aperture to view the target object. The aperture is sized such that when the operator's viewing eye is approximately 56 millimeters (mm.) from the pivoting member, a view seen through the aperture is substantially equivalent to the target area of the imaging assembly. Thus, the operator may advantageously use the aperture both for properly aiming the device at the target object and for moving the device closer to or further away from the target object so that the target object is as large as possible while still being imaged within a perimeter of the target area. When the operator does not desire to use the viewing assembly, the pivoting member is folded down out of the operator's line of vision and out of harm's way.[0022]
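The sizing of the aperture follows from similar triangles: the aperture must subtend, at the 56 mm eye distance, the same angle as the imaging assembly's field of view. A minimal sketch of that relation follows; the function name and the example field-of-view angles are illustrative, since the specification does not state the field of view numerically.

```python
import math

def aperture_width_mm(fov_deg, eye_distance_mm=56.0):
    """Aperture dimension whose angular extent, seen from an eye at the
    stated distance, equals the imaging assembly's field of view
    (similar-triangles sketch; fov_deg is a hypothetical input)."""
    return 2.0 * eye_distance_mm * math.tan(math.radians(fov_deg) / 2.0)
```

A wider field of view thus demands a proportionally wider aperture for the through-the-aperture view to match the target area.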
The portable data collection device of the present invention includes a pistol-grip shaped housing enclosing circuitry of the device. An angled snout extending from a grip portion of the housing includes an opening through which a portion of the illumination assembly and optic assembly extend. A finger operated trigger is provided on a target facing surface of the housing. The trigger is depressed by an operator to actuate the imaging assembly to read a bar code dataform in the target area. A push button actuator extends through an opening of the housing spaced apart from the trigger. The push button actuator is located so as to be depressible by the operator's thumb as the housing is cradled in the operator's hand. Depressing the push button actuator actuates the imaging assembly to capture an image of the target area. A slider extends through a slotted opening in the housing and is operatively connected to the focusing module. By changing the position of the slider, a thickness of an optic through which reflected light passes is altered and the best focusing position of the optic assembly is correspondingly changed. In an alternate embodiment of the focusing module, image analysis circuitry is provided which analyzes gray scale values corresponding to a captured image frame and automatically changes the thickness of the focusing module optic to achieve a clearly focused image of the target area.[0023]
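The automatic embodiment can be illustrated with a simple sharpness figure of merit computed from gray scale values: try each candidate optic thickness, score the resulting frame, and keep the sharpest. The gradient-based metric and both function names below are illustrative assumptions; the specification does not fix a particular image analysis method.

```python
def sharpness(frame):
    """Sum of squared horizontal and vertical gray level differences --
    one common focus figure of merit (an illustrative choice)."""
    score = 0
    for r in range(len(frame) - 1):
        row, below = frame[r], frame[r + 1]
        for c in range(len(row) - 1):
            score += (row[c + 1] - row[c]) ** 2 + (below[c] - row[c]) ** 2
    return score

def best_thickness(capture, thicknesses):
    """Capture a frame at each candidate optic thickness (capture is a
    hypothetical callable) and return the thickness giving the sharpest
    frame."""
    return max(thicknesses, key=lambda t: sharpness(capture(t)))
```

A well focused frame has strong local contrast and so scores higher than a defocused (blurred, flattened) one, which is what lets the circuitry pick the thickness automatically.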
The aforementioned and other aspects of the present invention are described in more detail in the detailed description and accompanying drawings which follow.[0024]
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a perspective view of a portable data collection device of the present invention with a pivoting member of a viewing assembly in a folded down position;[0025]
FIG. 1A is a back elevation view of a portion of a housing of the portable data collection device;[0026]
FIG. 2 is a perspective view of the portable data collection device with the viewing assembly pivoting member in an upright position;[0027]
FIG. 3 is a sectional view of a portion of a housing of the portable data collection device with the viewing assembly pivoting member in the folded down position;[0028]
FIG. 4 is a sectional view of a portion of the housing of the portable data collection device with the viewing assembly pivoting member in the upright position;[0029]
FIG. 5 is a view, partly in side elevation and partly in section, of the portable data collection device showing use of the viewing assembly to align the device with a target object;[0030]
FIG. 6 is a top plan view of the portable data collection device;[0031]
FIG. 7 is a front elevation view of the portable data collection device as seen from a plane indicated by the line 7-7 in FIG. 6;[0032]
FIG. 8 is a perspective view of a modular portion of an imaging assembly of the portable data collection device of the present invention, the modular portion shown imaging a target dataform on an item;[0033]
FIG. 9 is a view of the modular portion of the imaging assembly of FIG. 8 with an upper half of a housing of the modular portion removed;[0034]
FIG. 10 is a schematic sectional view of an optic assembly of the modular portion of the imaging assembly of FIG. 8 including a focusing assembly;[0035]
FIG. 11 is an exploded perspective view of an illumination assembly of the modular portion of the imaging assembly of the present invention;[0036]
FIG. 12 is a perspective view of a lens array or front panel of the illumination assembly of FIG. 11;[0037]
FIG. 13 is an exploded perspective view of a targeting optics of the front panel of FIG. 12;[0038]
FIG. 14 is a front elevation view of the front panel of FIG. 12;[0039]
FIG. 15 is a back elevation view of the front panel of FIG. 12;[0040]
FIG. 16 is a sectional view of the front panel of FIG. 12 as seen from a plane indicated by the line 16-16 in FIG. 14;[0041]
FIG. 17 is a sectional view of the front panel of FIG. 12 as seen from a plane indicated by the line 17-17 in FIG. 14;[0042]
FIG. 18 is a sectional view of the front panel of FIG. 12 as seen from a plane indicated by the line 18-18 in FIG. 14;[0043]
FIG. 19 is an exploded perspective view of an alternate embodiment of an illumination assembly of the modular portion of the imaging assembly of the present invention;[0044]
FIG. 20 is a perspective view of a lens array or front panel of the illumination assembly of FIG. 19;[0045]
FIG. 21 is an exploded perspective view of a targeting optics of the front panel of FIG. 20;[0046]
FIG. 22 is a front elevation view of the front panel of FIG. 20;[0047]
FIG. 23 is a back elevation view of the front panel of FIG. 20;[0048]
FIG. 24 is a sectional view of the front panel of FIG. 20 as seen from a plane indicated by the line 24-24 in FIG. 22;[0049]
FIG. 25 is a sectional view of the front panel of FIG. 20 as seen from a plane indicated by the line 25-25 in FIG. 22;[0050]
FIG. 25A is an exploded section view of a portion of the front panel shown in FIG. 25 as seen from a plane indicated by the line 25A-25A in FIG. 22;[0051]
FIG. 26 is a sectional view of the front panel of FIG. 20 as seen from a plane indicated by the line 26-26 in FIG. 22;[0052]
FIG. 27 is a representation of a crosshair illumination pattern generated by the illumination assembly of FIG. 11 superimposed on a target object;[0053]
FIG. 28 is a representation of a separation of crosshair illumination patterns of two targeting optics of the illumination assembly of FIG. 11 caused by imaging with the portable data collection device at a distance from a target object significantly different than a best focus position or distance of an optic assembly of the device;[0054]
FIG. 29 is a representation of an angular shift of crosshair illumination patterns of two targeting optics of the illumination assembly of FIG. 11 caused by imaging with the portable data collection device tilted such that the front panel is not substantially parallel to a surface of a target object;[0055]
FIG. 30 is a representation of a crosshair and half frame illumination pattern generated by a first targeting optics of the illumination assembly of FIG. 19;[0056]
FIG. 31 is a representation of a crosshair and half frame illumination pattern generated by a second targeting optics of the illumination assembly of FIG. 19;[0057]
FIG. 32 is a representation of a crosshair and full frame illumination pattern generated by the first and second targeting optics of the illumination assembly of FIG. 19;[0058]
FIG. 33 is a representation of a matrix dataform and an associated signature block;[0059]
FIG. 34A is one portion of a block diagram of selected circuitry of the portable data collection device of the present invention;[0060]
FIG. 34B is a second portion of a block diagram of selected circuitry of the portable data collection device of the present invention, the second portion matching the first portion shown in FIG. 34A;[0061]
FIG. 35 is a flow chart setting forth one operating embodiment of the portable data collection device of the present invention to decode a bar code dataform and capture an image of a target area;[0062]
FIG. 36 is a flow chart setting forth a second operating embodiment of the portable data collection device of the present invention to decode a bar code dataform and capture an image of a target area;[0063]
FIG. 37 is a flowchart setting forth a third operating embodiment of the portable data collection device of the present invention wherein a captured image frame includes a dataform and a signature block as shown in FIG. 33 and in which decoded dataform data and a portion of the captured image are output;[0064]
FIG. 38 is a perspective view of a support fixture for the focusing assembly of the optic assembly of FIG. 10;[0065]
FIG. 39 is a sectional view of the focusing assembly support fixture of FIG. 38;[0066]
FIG. 40 is a front elevation view of a movable wedge shaped optic of the focusing assembly of the optic assembly of FIG. 10;[0067]
FIG. 41 is a top plan view of the movable wedge shaped optic of FIG. 40 as seen from the plane indicated by the line 41-41 in FIG. 40;[0068]
FIG. 42 is a schematic sectional view of an optic assembly of the modular portion of the imaging assembly of FIG. 8 including an alternate embodiment of a focusing assembly;[0069]
FIG. 42A is a side elevation view of a portion of the focusing assembly of FIG. 42 as seen from a plane indicated by the line 42A-42A in FIG. 42;[0070]
FIG. 43 is a schematic sectional view of an optic assembly of the modular portion of the imaging assembly of FIG. 8 including another alternate embodiment of a focusing assembly;[0071]
FIG. 44A is one portion of a block diagram of selected circuitry of the portable data collection device of the present invention including the focusing assembly shown in FIG. 43; and[0072]
FIG. 44B is a second portion of a block diagram of selected circuitry of the portable data collection device of the present invention, the second portion matching the first portion shown in FIG. 44A.[0073]
DETAILED DESCRIPTION

Turning to the drawings, a portable data collection device in accordance with the present invention is shown at 10 in FIGS. 1-7. The data collection device 10 includes a housing 12 defining an interior region. The housing 12 includes a gripping portion 14 sized to be grasped in the hand of an operator and an angled snout 16 extending from the gripping portion. With specific reference to FIG. 7, the snout 16 includes an opening through which a portion of a two dimensional (2D) photosensor array imaging assembly 18 extends. The imaging assembly 18 includes a modular portion 20 and a control and decoder board 22 electrically coupled to the electronic circuitry in the modular portion. The control and decoder board 22 is supported within the gripping portion 14 of the housing 12. Also supported within the housing gripping portion 14 is a power source 24 such as a rechargeable battery for supplying operating power to the portable data collection device 10.[0074]
A dataform reading trigger switch or actuator 26 extends through an opening in the gripping portion 14. Also extending through an opening in the gripping portion 14 is an imaging push button trigger switch or actuator 28. The dataform reading trigger 26 is positioned to be depressed by an index finger of the operator while the gripping portion 14 of the housing 12 is held in the operator's hand. The imaging trigger 28 is positioned to be depressed by a thumb of the operator while the gripping portion 14 of the housing 12 is held in the operator's hand. Also extending through an opening in the housing 12 just above the imaging trigger 28 is a slider 29 moveable along a path of travel defined by a slotted opening 29a in the gripping portion 14. As will be discussed below, moving the slider 29 causes a best focus position or distance of an optic assembly 43 of the imaging assembly 18 to change, thereby allowing the operator to change a focusing range of the dataform reader 10. The slider 29 is positioned on the housing snout 16 to permit operation by the operator's thumb. Moving the slider 29 to an end 29b (FIG. 1A) of the slotted opening 29a causes the optic assembly 43 to have a best focus distance at approximately 5.5 inches (140 mm.) in front of an outwardly facing surface 90 of a forwardmost lens of the optic assembly 43. On the other hand, moving the slider 29 to an opposite end 29c of the slotted opening 29a causes the optic assembly 43 to have a best focus distance at approximately 36 inches (915 mm.) in front of the outer surface 90 of the forwardmost lens of the optic assembly 43.[0075]
The gripping portion 14 also includes two small openings through which a distal portion of a red light emitting diode (LED) indicator 30 and a distal portion of a green LED indicator 32 extend. Finally, the housing 12 includes an opening exposing a portion of a microphone 34 mounted in the housing interior region and another opening through which a radio antenna 36 extends. The interior region of the housing 12 supports the imaging assembly 18 and other electronic circuitry to be described below.[0076]
Referring to FIG. 8, which shows a perspective view of the modular portion 20 of the imaging assembly 18, it can be seen that the modular portion includes a housing 40 which supports an illumination assembly 42 and a board camera assembly 38. The housing 40 includes an upper portion 39a and a lower portion 39b which advantageously are identically shaped and positioned symmetrically about a part line 41. The board camera assembly 38 includes the optic assembly 43 which focuses an image of a target area 44 onto a photosensor array 48. The target area 44 is defined by a field of view of the board camera assembly 38. The target area 44 will generally include a target object 45 such as a one or two dimensional bar code dataform or a matrix dataform to be decoded. The illumination assembly 42 includes four illumination optic portions 88a, 88b, 88c, 88d, each of which projects an even intensity distribution of illumination across the target area 44.[0077]
FIG. 9 is a top view of the modular portion 20 with the upper portion 39a of the housing 40 removed. The board camera assembly 38 includes a rear printed circuit board 52 and a front printed circuit board 54, both of which are secured in the housing 40 in slots 56a, 56b, 56c, 56d. A two dimensional photosensor array 48 is positioned on a support 49 (FIG. 10) affixed to a front surface 54a of the front printed circuit board 54. The photosensor array 48 receives reflected illumination from the target area 44 focused through the optic assembly 43. The support 49 surrounds the photosensor array 48 and holds a thin piece of quartz 50 in spaced apart, parallel relationship with the photosensor array 48. The quartz piece 50 has a thickness of 0.6 mm. and is spaced 1.310 mm. from the photosensor array 48. The quartz piece 50 has an index of refraction of 1.5443.[0078]
A shroud 58 positions the optic assembly 43 with respect to the photosensor array 48 and shrouds ambient illumination from the array. The illumination assembly 42 includes a printed circuit board 60, a lens array 62 and two targeting LEDs 64a, 64b. The lens array 62 functions as the outer or front panel of the modular portion 20. The term “front panel” will be used interchangeably with the term “lens array” throughout. A plurality of exposure LEDs 66 are disposed on the front surface of the printed circuit board 60 to direct illumination through the front panel 62 towards the target area 44. The circuit board 60 and the front panel 62 are secured in slots 56e, 56f, 56g, 56h in the upper and lower housing portions 39a, 39b. Securing the board camera assembly 38 and the illumination assembly 42 in the same housing 40 assures that illumination is properly directed onto the target area 44.[0079]
FIG. 10 shows a cross section of the camera assembly 38 with the optic assembly 43 focusing an image of the target area 44, including an image of the target object 45, onto the photosensor array 48. The performance of the portable data collection device 10 is enhanced by the optic assembly 43 including a focusing assembly 800 which provides the board camera assembly 38 with an extended, variable working range. The focusing assembly 800 is operable to vary a best focus position or distance S2 (FIG. 10) of the optic assembly 43. The best focus position S2 is a distance from an outermost optic surface 90 of a forwardmost lens 43a of the optic assembly 43 to the target object 45 at which the best or clearest image of the target object is focused on the photosensor array 48. The sharpness of the focused image gradually degrades as the target object 45 is moved from the best focus position S2 towards a near field cut off distance S1. If the target object 45 is a dataform, moving the target object 45 closer than the near field cut off distance S1 would result in an image projected onto the photosensor array 48 that is undecodable. Similarly, the image sharpness gradually degrades as the target object 45 is moved from the best focus position S2 towards a far field cut off distance S3. Assuming that the target object 45 is a dataform, moving the target object 45 further away than the far field cut off distance S3 would result in an image projected onto the photosensor array 48 that is undecodable.[0080]
The focusing module 800 includes a focusing optic 810 comprising two wedge shaped lenses 820, 830 which are congruent in shape and supported in a lens support fixture 840 (seen in FIGS. 38 and 39). As can best be seen in FIGS. 10, 39 and 41, the lenses 820, 830, when viewed from above, define congruent triangles. The angles labeled "a" in the lenses 820, 830 are both substantially 90 degree angles and the acute angle labeled "b" in lens 820 is substantially equal to the acute angle labeled "b" in lens 830. The lenses 820, 830 are supported by the fixture 840 such that the flat, inclined surfaces 822, 832 of lenses 820, 830 are parallel and adjacent. Further, the outwardly facing flat surfaces 824, 834 are substantially parallel. The focusing optic 810 is positioned such that it is substantially perpendicular to a central ray c (FIG. 10) of reflected light from the target area 44 which passes through lenses 43a, 43b, 43c, 43d, 43e of the optic assembly 43. The lenses 820, 830 are preferably fabricated from type BK7 glass having a refractive index of 1.5168. Type BK7 glass is available from Schott Glass Technologies, Inc. of Duryea, Pa.[0081]
The focusing module 800 is configured such that an effective thickness t (FIG. 10) of the focusing optic 810 through which the reflected light passes may be changed by the operator of the portable data collection device 10 to vary the best focus position S2 of the optic assembly 43. Correspondingly, the near field cut off distance S1 and the far field cut off distance S3 will also be changed as follows:[0082]
|
| Thickness | Best Focus Distance | Near Field Cutoff | Far Field Cutoff |
|
| 1.0 mm. | 140 mm. (5.5 in.) | 65 mm. (2.5 in.) | 290 mm. (11.5 in.) |
| 1.726 mm. | 305 mm. (12.0 in.) | 90 mm. (3.5 in.) | 600 mm. (23.5 in.) |
| 2.136 mm. | 915 mm. (36.0 in.) | 150 mm. (6.0 in.) | Infinity |
|
The minimum value of S2 (5.5 in. or 140 mm.) will be referred to as MIN S2.[0083]
Suitable dimensions for the two lenses 820, 830 comprising the focusing optic 810 are as follows:[0084]
|
| Lens | Description & Label | Dimension |
|
| 820 | Height A (FIG. 41) | 10.00 mm. |
| 820 | Base B (FIG. 41) | 2.25 mm. |
| 820 | Width E (FIG. 40) | 4.00 mm. |
| 830 | Height C (FIG. 41) | 4.00 mm. |
| 830 | Base D (FIG. 41) | 0.90 mm. |
| 830 | Width E (FIG. 40) | 4.00 mm. |
|
The distance labeled D1 corresponds to a distance between an optic surface 110 of the rearwardmost lens 43e and the forward facing surface 824 of the lens 820. A suitable distance D1 is 3.1209 mm. The distance labeled D2 corresponds to a distance between the rearward facing surface 834 of the lens 830 and the photosensor array 48. A suitable distance D2 is 4.4000 mm. The total distance DT between the photosensor array 48 and the optic surface 110 of the rearwardmost lens 43e is the sum of D1, D2 and the focusing optic thickness t:[0085]
DT = D1 + D2 + t
   = 3.1209 mm. + 4.4000 mm. + 1.0000 mm.
   = 8.5209 mm.
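The total distance is a direct sum of the distances just given; a one-line numerical check (values from the text, with the focusing optic at its minimum thickness):

```python
# Distances between the rearwardmost optic surface 110, the focusing
# optic 810, and the photosensor array 48, in millimeters (from the text).
d1 = 3.1209  # surface 110 to forward facing surface 824
d2 = 4.4000  # rearward facing surface 834 to photosensor array 48
t = 1.0      # minimum effective thickness of focusing optic 810

dt = d1 + d2 + t
print(round(dt, 4))  # 8.5209
```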
Operationally, the thickness t of the focusing optic 810 is varied by moving or sliding the lens 820 with respect to the lens 830. As can best be seen in FIGS. 40 and 41, the moveable lens 820 includes a pair of projecting flanges 825, 826 extending from top and bottom surfaces 827, 828 of the lens 820. The projecting flanges 825, 826 slidingly engage respective slots 842, 844 of spaced apart horizontal plates 846, 848 of the support fixture 840. The slots 842, 844 function to guide the projecting flanges 825, 826 and, therefore, the moveable lens 820 along a path of travel labeled T in FIG. 39.[0086]
A drive means 850 is provided to move the moveable lens 820 along the path of travel T. The drive means 850 includes a flexible belt 852 having one end 854 attached to a base surface 829 of the lens 820. An opposite end 856 of the belt 852 is attached to the slider 29. A portion 29d of the slider 29 extends through the slotted opening 29a in an operator facing back side of the snout 16 of the housing 12. The slider 29 is slidably confined between a pair of parallel ledges 29e, 29f (which can be seen in dashed line in FIG. 1A and one of which can be seen in FIG. 10) which extend outwardly from the inner surface 16a of the housing snout 16. The ledges 29e, 29f have peripheral lips 29g to further confine the slider 29. As the slider portion 29d is moved along the slotted opening 29a in a direction labeled R in FIG. 10, the belt 852 moves in the same direction and the lens 820 correspondingly moves. As can be seen in FIG. 10, the belt 852 is supported by guides 858, 860 extending from an inner surface of the housing snout 16. The belt 852 extends through a guide 862 defining an opening in the modular housing 20 and another guide 864 defining an opening in the shroud 58. The belt 852 further extends through a guide 866 in a vertical side plate 868 of the fixture 840. The fixture includes the vertical side plate 868 and another vertical side plate 870 which function to maintain the proper spaced relation between the horizontal plates 846, 848. The fixture 840 is secured to the front side 54a of circuit board 54 by four screws 872 extending through openings in flanges 874 and through the circuit board 54. The flanges 874 extend from the horizontal plates 846, 848.[0087]
The belt 852 is flexible enough to conform to the curves defined by the guides 858, 860 but is stiff enough to move the moveable lens 820 along its path of travel T when the slider 29 is moved along the slotted opening 29a. When the slider 29 is moved, the lens 820 moves along its path of travel T guided by the engagement of the projecting flanges 825, 826 and the slots 842, 844 of the spaced apart horizontal plates 846, 848. As the moveable lens 820 moves, the moveable optic contact surface 822 slides across the stationary optic contact surface 832, thereby varying the thickness t of the focusing optic 810, that is, varying the total distance the reflected light from the target area 44 must traverse before reaching the photosensor array 48.[0088]
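The relationship between slider travel and optic thickness follows from the wedge geometry: sliding the moveable lens 820 a distance x along its path of travel changes the combined thickness by approximately x·tan(b), where tan(b) is the ratio of the wedge base to its height. The sketch below applies the dimensions tabulated above (base 2.25 mm., height 10.00 mm., minimum thickness 1.0 mm.); measuring travel along the height direction of lens 820 is an illustrative assumption, not a statement from the patent.

```python
# Wedge geometry of the moveable lens 820 (dimension-table values; the
# travel-direction assumption is illustrative, not taken from the patent).
BASE_B = 2.25    # mm, base B of lens 820 (FIG. 41)
HEIGHT_A = 10.0  # mm, height A of lens 820 (FIG. 41)
T_MIN = 1.0      # mm, minimum effective thickness t of focusing optic 810

TAN_B = BASE_B / HEIGHT_A  # tangent of the acute wedge angle "b"

def thickness(x_mm):
    """Effective thickness t of the focusing optic 810 after sliding
    the moveable lens 820 a distance x_mm along its path of travel T."""
    return T_MIN + x_mm * TAN_B

# Travel needed to reach the tabulated maximum thickness of 2.136 mm.
x_max = (2.136 - T_MIN) / TAN_B
print(f"travel for max thickness: {x_max:.2f} mm")
```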
The stationary lens 830 can be thought of as a compensation lens because it causes the focusing optic 810 to have the shape of a plate of glass with two parallel faces oriented such that the faces are normal or perpendicular to the incident reflected light no matter what the position of the moveable lens 820. Different positions of the moveable lens 820 are shown in FIG. 10. In solid line, a position of the lens 820 at one end of its path of travel T is shown. This position corresponds to a minimum thickness t of the focusing optic 810. In the dashed line labeled 820a, an intermediate position of the lens 820 is shown corresponding to a medium thickness t of the focusing optic 810. Finally, in the dashed line labeled 820b, a position of the lens 820 at an opposite end of its path of travel T is shown. This position corresponds to a maximum thickness t of the focusing optic 810. When the slider 29 is in the position shown in FIG. 10, that is, abutting the end 29b of the opening 29a, the moveable lens 820 is at the position which results in a minimum thickness t of the focusing optic 810. As the slider 29 is moved to a position abutting the opposite end 29c of the opening 29a, the moveable lens 820 is moved to the position labeled 820b which results in a maximum thickness t of the focusing optic 810.[0089]
By using the two wedge shaped lenses 820, 830 as shown, the resultant focusing optic 810 is equivalent to a glass plate with parallel sides and variable thickness. Since the index of refraction of the focusing optic 810 (1.5168) is greater than the index of refraction of air, inserting the optic 810 between the innermost lens 43e of the optic assembly 43 and the photosensor array 48 will change the best focus distance S2. As the thickness of the focusing optic 810 increases, the best focus distance S2 also increases. Thus, imprinted on the housing snout 16 adjacent the end 29b of the opening 29a is the letter "N" indicating to the operator that moving the slider 29 toward the end 29b will cause the optic assembly 43 to reduce its best focus distance S2. The opposite end 29c has a letter "F" imprinted near it to indicate that moving the slider toward the end 29c will increase the best focus distance S2. Since the focusing optic 810 is essentially a glass plate with parallel sides, the reflected light passing through the optic 810 is not subject to image shift or tilt.[0090]
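The effect of plate thickness on focus can be estimated from a standard thick-plate result: a parallel plate of thickness t and index n inserted in a converging beam displaces the image plane by approximately t(1 − 1/n). This is a general paraxial optics approximation, not a formula stated in the patent; the sketch below applies it to the three tabulated thicknesses using the BK7 index of 1.5168.

```python
# Longitudinal image-plane displacement caused by a parallel glass plate
# inserted in a converging beam (standard paraxial approximation).
N_BK7 = 1.5168  # refractive index of type BK7 glass (from the text)

def image_shift_mm(t_mm, n=N_BK7):
    """Approximate shift of the focused image plane for plate thickness t_mm."""
    return t_mm * (1.0 - 1.0 / n)

# The three tabulated thicknesses of the focusing optic 810.
for t in (1.0, 1.726, 2.136):
    print(f"t = {t} mm -> image plane shift ~ {image_shift_mm(t):.3f} mm")
```

The shift grows with thickness, which is consistent with the text's observation that a thicker optic yields a longer best focus distance.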
When the focusing optic 810 has its minimum thickness t of 1.0 mm., the best focus position MIN S2 is at 140 mm. (5.5 in.) from the outward facing optic surface 90 of the lens 43a. At the best focus position of 140 mm., the field of view or target area 44 of the optic assembly 43 is generally rectangular in shape, having dimensions of approximately 82 mm. (3.2 in.) long by 62 mm. (2.4 in.) high. At a distance of 8.5 inches from the front surface 90, the target area of the optic assembly 43 is approximately 127 mm. (5 inches) long by 95 mm. (3.75 inches) high. The optic assembly 43 is capable of decoding a bar code dataform with narrow width bars (e.g., a bar code dataform with a minimum bar width of 0.015 in. (0.381 mm.)) at the near field distance S1.[0091]
The preferred optic assembly 43 includes the five lenses 43a, 43b, 43c, 43d, 43e and a metal disk 98 having a pin hole aperture 98a and, as shown, includes eleven optic surfaces labeled 90-110. In the preferred embodiment the rearmost optic surface 110 of lens 43e is positioned 10.2 mm. to the front of the photosensor array 48, that is, the distance labeled DT in FIG. 10 is 10.2 mm.[0092]
The optic prescriptions for each of the optic surfaces are as follows:[0093]
|
| Optic Surface | Radius of Surface Curvature | Diameter | Shape |
|
| 90 | R = 13.52 mm. | D = 8.8 mm. | convex |
| 92 | R = 5.3 mm. | D = 8.8 mm. | concave |
| 94 | R = 12.47 mm. | D = 7 mm. | convex |
| 96 | R = 19.9 mm. | D = 7 mm. | convex |
| 98 | Pinhole diameter 0.81 mm. | | |
| 100 | R = 6.76 mm. | D = 7 mm. | concave |
| 102 | R = 12.47 mm. | D = 7 mm. | concave |
| 104 | R = 158.52 mm. | D = 7 mm. | convex |
| 106 | R = 6.76 mm. | D = 7 mm. | convex |
| 108 | R = 28.08 mm. | D = 7 mm. | convex |
| 110 | R = 11.26 mm. | D = 7 mm. | convex |
|
The distance between successive optic surfaces 90-110 is as follows:[0094]
|
| Optic Surfaces | Distance |
|
| 90-92 | 0.77 mm. |
| 92-94 | 4.632 mm. |
| 94-96 | 2.32 mm. |
| 96-98 | 1.798 mm. |
| 98-100 | 0.805 mm. |
| 100-102 | 0.77 mm. |
| 102-104 | 0.327 mm. |
| 104-106 | 2.34 mm. |
| 106-108 | 0.178 mm. |
| 108-110 | 2.07 mm. |
|
Such an optic assembly is available from Marshall Electronics, Inc. of Culver City, Calif.[0095]
An alternate optic assembly which includes a compact aspheric plastic doublet design can be found in U.S. patent application Ser. No. 08/494,435, filed Jun. 26, 1995, entitled "Extended Working Range Dataform Reader", now issued as U.S. Pat. No. 5,811,784 on Sep. 22, 1998. U.S. Pat. No. 5,811,784 is incorporated in its entirety herein by reference.[0096]
Because the desired working range and field of view of the portable data collection device 10 dictate that the optic assembly 43 have a large F# (F#5.6 or greater), the illumination assembly 42 must provide adequate illumination of the target area 44 during the exposure period so that enough reflected light is absorbed by the photosensor array 48 to generate a suitably bright image. However, the exposure period is normally limited to 0.01 seconds or less to minimize the smear effect of an operator's hand jittering during a dataform reading session. Therefore, the illumination assembly 42 must provide adequate illumination to accommodate the large F# and short exposure time.[0097]
Proper exposure of the photosensor array 48 requires an object field illumination of 0.3 lux, assuming an exposure period of 0.03 seconds and an F#1.2 optic. To determine the proper object field illumination E2 for a 0.01 second exposure period and an F#13 optic, the following formula is used:[0098]

E2 = E1 · (F#2/F#1)² · (t1/t2) = 0.3 lux · (13/1.2)² · (0.03 s/0.01 s) ≈ 106 lux
Therefore, the minimum required object field illumination for this invention is 106 lux at the far field cut off distance S3.[0099]
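The 106 lux figure can be checked numerically, assuming the required illumination scales with the square of the F-number and inversely with the exposure time (a standard exposure relation; the helper function name is ours, not the patent's):

```python
def required_illumination_lux(e_ref_lux, f_ref, t_ref_s, f_new, t_new_s):
    """Scale a reference object-field illumination to a new F-number and
    exposure period, assuming illumination ~ (F#)^2 / exposure time."""
    return e_ref_lux * (f_new / f_ref) ** 2 * (t_ref_s / t_new_s)

# Reference: 0.3 lux at F#1.2 and 0.03 s; target: F#13 and 0.01 s.
e2 = required_illumination_lux(0.3, 1.2, 0.03, 13.0, 0.01)
print(round(e2))  # 106
```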
Referring to FIG. 11, which is an exploded perspective view of the illumination assembly 42, the printed circuit board assembly 60 includes a plurality of surface mount exposure illumination LEDs 66. An acrylic or polycarbonate lens array 62 is positioned between the printed circuit board assembly 60 and the target area 44 for directing the illumination from the exposure LEDs 66 towards the target area 44. Preferably, the lens array 62 is a unitary structure fabricated from the material PMMA (polymethyl methacrylate). However, it should be appreciated that it could be fabricated from other suitable materials such as glass or a combination of glass optics supported in a molded panel or other suitable arrangement known to those skilled in the art. The printed circuit board assembly 60 includes printed conductors and a power lead 112 operative for supplying power to the illumination LEDs 66. A suitable surface mount illumination LED is produced by the MarkTech Corporation of Latham, N.Y., as Part No. MTSM735K-UR or MTSM745KA-UR. Each illumination LED 66 provides a luminous intensity of 285 millicandela (mcd) over an angular illumination field of about 68 degrees. The small footprint of each illumination LED 66 enables four LEDs to be placed in a row measuring less than 14 mm. The printed circuit board assembly 60 includes four banks of four illumination LEDs 66, totaling sixteen illumination LEDs providing 4560 mcd of uniform illumination over the target area 44.[0100]
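The total intensity figure follows directly from the bank arrangement; a quick check using only numbers stated in the text:

```python
# Four banks of four surface-mount LEDs, each rated 285 mcd (from the text).
banks, leds_per_bank, mcd_per_led = 4, 4, 285
total_leds = banks * leds_per_bank
total_mcd = total_leds * mcd_per_led
print(total_leds, total_mcd)  # 16 4560
```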
The lens array 62 includes four illumination optic portions 88a, 88b, 88c, 88d, each of which is aligned with a corresponding bank of illumination LEDs 66. The illumination optic portions 88a, 88b, 88c, 88d direct the 68 degree angular illumination field from each illumination LED 66 into a uniform field having an angular field of view which substantially corresponds to the angular field of view of the optic assembly 43 which defines the target area 44 (shown in FIGS. 8 and 9).[0101]
Referring to FIGS. 16 and 18, which show a horizontal cross section (FIG. 16) and a vertical cross section (FIG. 18) through the illumination optic portions 88a, 88b, 88c, 88d, it can be seen that each optic portion includes four vertically oriented cylindrical entry surfaces 116, one positioned in front of each LED 66, and a horizontally oriented cylindrical exit surface 118 positioned in front of each bank of LEDs 66. The vertically oriented cylindrical entry surfaces 116 define the horizontal field of illumination and the horizontally oriented cylindrical exit surfaces 118 define the vertical field of illumination. This arrangement provides an even illumination intensity distribution across the target area 44. The 4560 mcd of illumination provided by the illumination LEDs 66 will provide an illumination intensity in excess of 106 lux at a distance of 8.5 inches from the outermost optic surface 90 of the optic assembly lens 43a.[0102]
A central opening 67 in the printed circuit board assembly 60 provides an opening for the shroud 58 to extend through. The vertically oriented entry surfaces 116 have a radius of curvature of 2.50 mm. and a height I (FIG. 35) of 4.00 mm. while the horizontally oriented exit surfaces 118 have a radius of curvature of 3.00 mm. and a width J (FIG. 36) of 13.75 mm. Referring to FIGS. 34-36, suitable dimensions for the lens array 62 are as follows:[0103]
|
| Label | Description | Dimension |
|
| A | Height of lens array 62 | 21.75 mm. |
| B | Width of lens array 62 | 39.55 mm. |
| C | Diameter of center opening 67 of lens array 62 | 12.00 mm. |
| D | Height between middle of vertical entry surfaces 116 | 14.13 mm. |
| E | Thickness of lens array 62 | 1.95 mm. |
|
Referring again to FIG. 11, the illumination assembly 42 also includes a targeting arrangement or assembly 64 to aid in aiming the device 10 at the target object 45. The targeting assembly includes the targeting LED illuminators 64a, 64b which, when energized, project illumination through apertures 68, 70 in the printed circuit board 60 and into first and second targeting optics 72, 74, respectively, of the lens array 62. The first and second targeting optics 72, 74 are mirror images of each other and are identical in configuration. Each targeting optic generates a crosshair pattern of illumination CR1, CR2 (seen in FIG. 27) and, as will be discussed below, if the target object 45 is at a proper distance for imaging, i.e., at the minimum best focus position MIN S2 of the optic assembly 43, the crosshairs CR1, CR2 will coincide or overlap producing a single rectangular crossing or crosshair pattern of illumination CR (FIGS. 11 and 27). The rectangular illumination pattern CR will have a height h (18 mm.) and a width w (18 mm.) (FIG. 11). Of course, the rectangular illumination pattern CR will not be a perfect intersecting line crosshair but rather will be characterized by an illumination intensity distribution or pattern having some visible "thickness" t (FIG. 11), but will nonetheless be suitable for aiming the device 10.[0104]
The first and second targeting optics 72, 74, which are identical in configuration, are shown in cross section in FIGS. 17 and 18. The first targeting optics 72 comprises a lens with an aspherical light entry optic surface 726 and a segmented cylindrical light exit optic surface 728. The second targeting optics 74 comprises a lens with an aspherical light entry optic surface 730, similar to the aspherical light entry optic surface 726, and a segmented cylindrical light exit optic surface 732, similar to the segmented cylindrical light exit optic surface 728.[0105]
The aspherical entry surfaces 726, 730 each have a diameter of 8 mm., a radius of curvature of 2.890 mm. and a conic constant of −2.534. The segmented cylindrical light exit surfaces 728, 732 each have an 8.0 mm. by 8.0 mm. square shaped outer perimeter. The segmented cylindrical surface 728 is comprised of four triangular shaped sections 740, 742, 744, 746 (FIG. 14) while the segmented cylindrical surface 732 is divided into four triangular shaped sections 750, 752, 754, 756, wherein the optic surfaces of sections 740 and 750 are identical, the optic surfaces of sections 742 and 752 are identical, the optic surfaces of sections 744 and 754 are identical and the optic surfaces of sections 746 and 756 are identical.[0106]
Upper and lower triangular sections 740, 744 comprise vertically oriented cylindrical light exit optic surfaces. Left and right triangular sections 742, 746 comprise horizontally oriented cylindrical light exit optic surfaces. Similarly, upper and lower triangular sections 750, 754 comprise vertically oriented cylindrical light exit optic surfaces, while left and right triangular sections 752, 756 comprise horizontally oriented cylindrical light exit optic surfaces. The vertically oriented cylindrical optic surfaces 740, 744, 750, 754 have a radius of curvature of 25.00 mm. Similarly, the horizontally oriented cylindrical optic surfaces 742, 746, 752, 756 have a radius of curvature of 25.00 mm.[0107]
As can best be seen in FIG. 17, the horizontally and vertically oriented cylindrical optic surfaces 742, 746, 740, 744 are tipped at an angle c with respect to a longitudinal axis L-L through the lens array 62 and, therefore, are also tipped at an angle c with respect to the target area 44. The tip angle c of the horizontally oriented cylindrical optic surfaces 742, 746 shifts the horizontal position of the illumination rectangle or targeting crosshair CR1 (seen in FIG. 28) generated by the first targeting optics 72 such that it is horizontally centered in the target area 44, while the tip angle c of the vertically oriented cylindrical optic surfaces 740, 744 shifts the vertical position of the targeting crosshair CR1 generated by the first targeting optics 72 such that it is vertically centered in the target area 44. A suitable tip angle c is 9.85 degrees.[0108]
Similarly, as can also be seen in FIG. 17, the horizontally and vertically oriented cylindrical optic surfaces 752, 756, 750, 754 are also tipped at an angle c, which is preferably 9.85 degrees, with respect to the longitudinal axis L-L through the lens array 62. Note that the tilts of the segmented cylindrical light exit surfaces 728, 732 are equal in magnitude but opposite in direction, that is, the light exit surface 728 of the first targeting optics 72 slants downwardly to the left toward the front side 719 in FIG. 17, while the light exit surface 732 of the second targeting optics 74 slants downwardly to the right toward the front side 719 in FIG. 17. Also note that the two horizontally oriented light exit optic surfaces 118 which would be seen in FIG. 17 (and in FIG. 25 discussed below with respect to an alternate embodiment of the illumination assembly 42) have been removed for clarity of the drawing. It should also be noted that FIG. 13, which shows the segmented cylindrical light exit surface 732 as being comprised of four individual exploded "pieces", is only a representation to provide additional clarity as to the shape and tilt of the four light exiting surfaces 750, 752, 754, 756. The lens array 62 is fabricated as a single piece and the targeting optics 72, 74 and illumination optics 116, 118 are formed in the single piece. The lens optics are not fabricated by "piecing" together individual optics. The same is true with respect to the optic "pieces" represented in FIG. 21 of the alternate embodiment of the illumination assembly 42 shown in FIGS. 19-26 to be discussed below.[0109]
Additional suitable dimensions, labeled on FIG. 17, for the aspheric light entry surfaces 726, 730 and the segmented cylindrical light exit surfaces 728, 732 of the lens array 62 are as follows:[0110]
|
| Label | Description | Dimension |
|
| F | Maximum extension of aspheric light entry surfaces 726, 730 from back side 717 of lens array 62 | 1.75 mm. |
| G | Distance between maximum extension of aspheric light entry surfaces 726, 730 and center of respective segmented light exit surfaces 728, 732 along centerlines T-T | 5.25 mm. |
| H | Distance between centerlines T-T and outer edge of lens array 62 | 7.80 mm. |
|
As noted above, the minimum best focus distance MIN S2 is 140 mm. (5.5 inches). If the device 10 is oriented such that the lens array 62 is substantially parallel to a surface of the target object 45 (a dataform to be imaged and decoded) and positioned at the minimum best focus distance MIN S2 from the target object 45, then the targeting crosshairs CR1 and CR2 will coincide and generate the single targeting crosshair CR as shown in FIGS. 11 and 27, having an approximate height h of 18 mm. (0.7 in.) and an approximate width w of 18 mm. (0.7 in.), which corresponds to the target area 44 height of 62 mm. (2.4 in.) and width of 82 mm. (3.2 in.) at the minimum best focus position MIN S2 of 140 mm. (5.5 inches) in front of the optic surface 90.[0111]
If the device 10 is moved away from the minimum best focus distance MIN S2 with respect to the target object 45, the targeting crosshairs CR1 and CR2 will separate horizontally as shown in FIG. 28, thereby informing the operator that the distance of the device 10 from the target object 45 is not correct for best imaging or imaging and decoding. The operator will adjust the focusing optic 810 using the slider 29 appropriately to compensate for the distance between the target object 45 and the optic assembly 43. For example, if the distance between the target object 45 and the optic assembly 43 is more than 36 inches and, if the distance for some reason cannot be reduced (e.g., because the target object 45 is above the operator's outstretched arm and hand), the operator would use his or her thumb to move the slider 29 to the "F" marked end 29c of the slotted opening 29a so as to increase the best focus distance S2 from its minimum value (140 mm. or 5.5 in.) to its maximum value (915 mm. or 36.0 in.). Of course, if the operator can move the device 10 with respect to the target object 45, the preferred mode of operation would be to have the slider 29 at the "N" marked end 29b of the slotted opening 29a and adjust the device's distance from the target object 45 such that the CR1 and CR2 crosshairs overlap. At that point, the target object distance will be 140 mm. (5.5 in.), corresponding to the optic assembly 43 minimum best focus distance MIN S2 of 140 mm. (5.5 in.).[0112]
Finally, if the lens array 62 is not substantially parallel to a surface of the target object 45, that is, the device 10 is tilted forward or backward from a position where a front surface 719 (FIGS. 12 and 17) of the lens array or front panel 62 is parallel to the target object surface, the vertical portions of the illumination patterns of CR1 and CR2 will be angularly shifted or displaced as shown in FIG. 49; the greater the angle of tilt of the device 10, the greater will be the angular shifting of the vertical portions of the illumination patterns CR1, CR2.[0113]
Referring again to FIGS. 1-4, the portable data collection device 10 also includes a viewing assembly 600. The viewing assembly 600 includes a pivoting member 602 which pivots between a folded down position (FIGS. 1 and 3) and an upright position (FIGS. 2 and 4). The pivoting member 602 includes a rectangular opening 604. The opening 604 is approximately 32 mm. in the horizontal direction, labeled 606 in FIG. 2, and is approximately 24 mm. in the vertical direction, labeled 608 in FIG. 2. The horizontal and vertical dimensions 606, 608 of the opening 604 are chosen such that an angle of divergence or field of view of an operator 605 looking through the opening 604 at a distance of approximately 56 mm., labeled ED in FIG. 5, is substantially the same as the field of view of the imaging assembly 18. The ratio of the horizontal dimension 606 to the vertical dimension 608 is chosen to correspond to the ratio of the horizontal dimension to the vertical dimension of the matrix of photosensors comprising the 2D photosensor array 48.[0114]
As can be seen in FIG. 5, when in an upright position, the pivoting member 602 is in a line of vision of the operator 605. When the opening 604 is positioned approximately 56 mm. from the operator's eye, a viewing area 610 through the opening 604 substantially corresponds to the target area 44 of the imaging assembly 18.[0115]
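The claimed correspondence can be sanity-checked with simple trigonometry: the sketch below compares the angle subtended by the 32 mm. by 24 mm. opening at the 56 mm. eye distance with the angle subtended by the 82 mm. by 62 mm. target area at the 140 mm. best focus distance. Treating both as full-angle fields measured from a single on-axis point is an illustrative simplification, not a calculation given in the patent.

```python
import math

def full_angle_deg(extent_mm, distance_mm):
    """Full angle (degrees) subtended by a dimension of extent_mm
    viewed on-axis from distance_mm away."""
    return 2.0 * math.degrees(math.atan((extent_mm / 2.0) / distance_mm))

# Viewing opening 604: 32 mm x 24 mm at ~56 mm from the eye.
view_h = full_angle_deg(32.0, 56.0)
view_v = full_angle_deg(24.0, 56.0)

# Target area 44: 82 mm x 62 mm at the 140 mm minimum best focus distance.
target_h = full_angle_deg(82.0, 140.0)
target_v = full_angle_deg(62.0, 140.0)

print(f"horizontal: {view_h:.1f} vs {target_h:.1f} degrees")
print(f"vertical:   {view_v:.1f} vs {target_v:.1f} degrees")
```

The two fields agree to within about a degree in each direction, consistent with the text's statement that the viewing area substantially corresponds to the target area.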
The pivoting member 602, when in the folded down position, is received in a well or recessed area 608 defined by an upper surface of the housing snout 16. In the folded down position, an upper surface 612 (FIG. 3) of the pivoting member 602 is substantially flush with the snout upper surface. The snout upper surface 610 includes a recessed portion 614 (FIGS. 1 and 2) sized to permit an operator's finger tip to slip under a front lip 616 of the pivoting member 602 to permit the member to be popped up to the upright position from the folded down position. As can best be seen in FIGS. 3 and 4, the front lip 616 of the pivoting member 602 fits under a slightly extending upper edge 617 of the snout upper surface to hold the pivoting member with a slight interference fit in the folded down position.[0116]
The pivoting member 602 pivots on a pair of cylindrical portions 618 which extend from sides of the pivoting member near its bottom edge. The cylindrical portions 618 rotatably fit within corresponding cylindrical recesses in the snout 16. Turning to FIGS. 3 and 4, an arcuate biasing spring 620 is positioned in a recessed portion 622 of the snout 16. The recessed portion 622 is shaped to confine the spring 620 with edge portions of the snout defining the recessed portion. The spring 620 has a humped middle portion which biases the pivoting member 602 to either the upright position or the folded down position.[0117]
In the preferred embodiment of the portable data collection device of the present invention, the photosensor array 48 is part of the board camera assembly 38 commercially available from such vendors as Sharp or Sony of Japan. Referring to FIGS. 17A and 17B, the camera assembly, when activated, generates a composite video signal 262. The board camera assembly 38 also includes a clock generator 256, synchronization signal circuitry 258 and analog signal processing circuitry 260 for reading illumination intensity values out of each photosensor of the photosensor array 48 and generating the composite video signal 262.[0118]
The intensity of light incident on individual pixels or photosensors of the photosensor array 48 varies somewhat uniformly from very bright (whitest areas of the image) to very dark (darkest areas of the image). The preferred 2D photosensor array 48 comprises an interlaced 752 by 582 matrix array of photodiode photosensors or image pixels (for a total of 437,664 pixels). The clock generator 256 is coupled to a crystal oscillator and generates asynchronous clocking signals to read out charges accumulating on individual photosensors over an exposure period. The charges on the photosensors are read out through CCD elements adjacent the photosensor array photosensors. The charges are converted to a voltage signal 250 wherein temporal portions of the voltage signal represent the charges accumulated on each photosensor. One CCD element is provided for reading out the charges on two photosensors; thus, two read outs of the photosensor array comprise one full image frame, the frame being comprised of two interlaced fields.[0119]
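The array and interlace arithmetic stated above can be verified directly; the field split below (each interlaced field carrying half the rows, since one CCD element serves two photosensors) is the text's own description expressed numerically.

```python
# 752 x 582 interlaced photodiode array (figures from the text).
cols, rows = 752, 582
total_pixels = cols * rows   # image pixels in one full frame
rows_per_field = rows // 2   # each interlaced field carries half the rows
pixels_per_field = cols * rows_per_field
print(total_pixels, rows_per_field)
```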
The camera assembly 38 generates the composite analog video signal 262 (FIG. 17A) corresponding to consecutive fields of the image incident on the photosensor array 48. The video signal 262 is termed "composite" because it includes synchronization signals, generated by the synchronization signal circuitry 258, which correlate portions of the video signal to particular photosensors, interspersed among image signal portions wherein the signal magnitude represents charges on individual photosensors read out from a given row of the photosensor array 48.[0120]
The board camera assembly 38 also includes gain control circuitry 252 for controlling amplification of the image signal 253 and exposure period control circuitry 254 for controlling a duration of an exposure period of the pixels. Both the exposure period control circuitry 254 and the gain control circuitry 252 are controlled by fuzzy logic exposure parameter control circuitry discussed with reference to FIG. 34A.[0121]
The synchronization signals 270 generated by the synchronization signal circuitry 258, the clock signals 268 generated by the clock generator 256, and the composite video signal 262 are output to signal processing circuitry 264 on the control and decoder board 22. Because the signal processing circuitry is configured to receive a composite video signal, it should be appreciated that selection of the board camera assembly 38 and its accompanying components for generating the composite video signal are not critical to the present invention.[0122]
Under the control of a microprocessor 266 mounted on the control and decoder board 22, the video signal 262 is input to the signal processing circuitry 264 along with the clocking signals 268 and synchronization signals 270. The signal processing circuitry 264 includes synchronization extractor circuitry which receives the clocking signals 268 and the synchronization signals 270 and generates signals which are coupled to analog to digital converter circuitry (A/D converter circuitry) 272 causing the A/D converter circuitry to periodically digitize the video signal 262. The A/D converter circuitry 272 includes an A/D converter generating an 8 bit value representing the illumination incident on a pixel of the array.[0123]
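An 8 bit conversion maps each sampled signal level onto one of 256 gray values. The sketch below is an illustrative model of that quantization step only; the full-scale voltage range and the linear mapping are assumptions for the example, not parameters given in the patent.

```python
def digitize_8bit(voltage, v_min=0.0, v_max=1.0):
    """Map an analog sample linearly onto an 8-bit gray value (0-255).
    The 0.0-1.0 V full-scale range is hypothetical."""
    clamped = min(max(voltage, v_min), v_max)
    return round((clamped - v_min) / (v_max - v_min) * 255)

print(digitize_8bit(0.0), digitize_8bit(1.0))  # 0 255
```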
[0124] Direct memory access (DMA) control circuitry 275 receives the synchronization signals 270 and clock signals 268 and generates address signals 276a coupled to the frame buffer memory 274 to indicate a storage location for each value generated by the A/D converter circuitry 272.
[0125] Data signals 276 representing the values generated by the A/D converter circuitry 272 are coupled to the frame buffer memory 274. Control and selection circuitry 284 mounted on the control and decoder board 22 and coupled to the frame buffer memory 274 receives successive image frames temporarily stored in the frame buffer memory 274. Also coupled to the control and selection circuitry 284 are the dataform read trigger circuit 26a, which, in turn, is coupled to the dataform reading trigger 26, and an image capture trigger circuit 28a, which, in turn, is coupled to the imaging trigger 28.
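The digitization and storage path described above can be illustrated in software. This is a hedged sketch, not the patented circuitry: the function name, the sensor callback and the 4x4 frame size are hypothetical stand-ins for the way the A/D converter circuitry 272 quantizes each photosensor charge to an 8 bit value and the DMA control circuitry 275 assigns it a storage location.

```python
# Illustrative sketch (hypothetical names): synchronization signals delimit
# rows, clock ticks delimit pixels, and each analog sample is quantized to
# an 8-bit value and stored at a row/column location in the frame buffer.

def digitize_frame(sample_pixel, rows, cols):
    """Build a frame buffer of 8-bit values, one per photosensor."""
    frame = []
    for r in range(rows):                # vertical sync delimits rows
        row = []
        for c in range(cols):            # clock ticks delimit pixels
            voltage = sample_pixel(r, c)                  # 0.0 .. 1.0
            value = min(255, max(0, int(voltage * 255)))  # 8-bit quantize
            row.append(value)
        frame.append(row)
    return frame

# Toy sensor whose brightness rises toward the lower-right corner.
frame = digitize_frame(lambda r, c: (r + c) / 6.0, rows=4, cols=4)
```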
[0126] When an operator initiates a dataform reading session (dataform reading mode) by depressing the dataform reading trigger 26, the dataform read trigger circuit 26a sends a signal to the control and selection circuitry 284 causing the control and selection circuitry to couple a captured frame from the frame buffer memory 274 to image processing and decoder circuitry 290.
[0127] The image processing and decoder circuitry 290 includes a decoder 292 for decoding 1D and 2D dataforms in the target area 44. The image processing and decoder circuitry 290 operates on the stored frame of image data to extract dataform cell data (determining the black or white value of each cell of the dataform) and to decode the cell data. Cell extraction is done in accordance with U.S. patent application Ser. No. 08/543,122, entitled "Sub Pixel Dataform Reader With Dynamic Noise Margins", filed Oct. 13, 1995 and assigned to the assignee of the present invention. The contents of application Ser. No. 08/543,122 are hereby incorporated by reference. Decoding of the cell data is accomplished by known decoding methods for each particular dataform format.
[0128] Also coupled to the control and selection circuitry 284 are image compression circuitry 294 and serial output circuitry 296. The control and selection circuitry 284 routes data 298 representing decoded dataform data directly from the decoding circuitry 292 to the serial output circuitry 296. The decoded dataform data 298 is not compressed prior to output because there is a possibility of error in the compression and subsequent decompression process, and losing even a portion of decoded dataform data may have adverse consequences such as subsequent errors in updating inventory, determining the status of an item, tracking an item, etc.
[0129] When an operator initiates an imaging session (imaging mode) by depressing the imaging trigger 28, the image capture trigger circuit 28a sends a signal to the control and selection circuitry 284 causing the control and selection circuitry to couple a captured frame from the frame buffer memory 274 either to the image compression circuitry 294, to be compressed before being output to the serial output circuitry 296, or directly to the serial output circuitry 296 without compression.
[0130] Generally, the control and selection circuitry 284 will be programmed to route the data representing a captured image frame to the image compression circuitry 294 because the occurrence of one or more errors in the data representing an image is normally not a significant problem. That is, an image of an item in the target area 44 will still be recognizable and useful to supervisory personnel viewing the image reconstructed from the captured image frame data even if there is some slight distortion of the image. After compression of the image data by the image compression circuitry 294, compressed image data 300 is routed to the serial output circuitry 296. If, however, a high resolution image is needed, the control and selection circuitry 284 may be appropriately programmed to route the data representing the captured frame directly to the serial output circuitry 296.
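The routing policy in the preceding paragraphs (decoded dataforms never compressed; images usually compressed unless high resolution is needed) can be sketched as a small dispatch function. All names and the placeholder `compress` stand-in are invented for illustration; the real image compression circuitry 294 is hardware, not a Python function.

```python
# Hedged sketch of the routing policy: decoded dataform data bypasses
# compression (loss is unacceptable), images are compressed unless a
# high-resolution frame is explicitly requested.

def compress(data):
    """Placeholder stand-in for image compression circuitry 294."""
    return data[:len(data) // 2]      # not a real codec; illustration only

def route(data, kind, need_high_res=False):
    """Return (payload, was_compressed) for the serial output circuitry."""
    if kind == "dataform":
        return data, False            # never risk loss on decoded dataforms
    if need_high_res:
        return data, False            # raw frame when resolution matters
    return compress(data), True       # images tolerate slight distortion

payload, was_compressed = route(b"ITEM-42", "dataform")
```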
[0131] The image compression circuitry 294 utilizes an image compression algorithm to reduce the size of a set of digital image data. One such algorithm is the 2D wavelet transform compression algorithm described in "A 64 Kb/s Video Codec Using the 2D Wavelet Transform" by A. S. Lewis and G. Knowles, published by the IEEE Computer Society Press, Order No. 2202. The HARC Wavelet Transform System utilizing such technology is available from the Houston Advanced Research Center in Houston, Tex. and is capable of compressing photographic data with an image compression ratio of up to 400:1.
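As a hedged illustration of the mechanism such wavelet codecs exploit (this is not the HARC system or the cited codec), the sketch below performs one level of the 2D Haar transform, the simplest wavelet, and then zeroes small coefficients; the resulting sparse array is what a downstream entropy coder would compress.

```python
# One level of the 2D Haar wavelet transform: pairwise averages (low-pass)
# and differences (high-pass), applied to rows and then columns. Smooth
# image regions yield near-zero difference coefficients, which are zeroed
# by thresholding and then compress extremely well.

def haar_rows(m):
    out = []
    for row in m:
        avgs = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
        difs = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
        out.append(avgs + difs)
    return out

def transpose(m):
    return [list(col) for col in zip(*m)]

def haar2d(m):
    """One level of the 2D Haar transform (rows, then columns)."""
    return transpose(haar_rows(transpose(haar_rows(m))))

def threshold(m, t):
    """Zero coefficients smaller than t; zeros compress well downstream."""
    return [[0 if abs(v) < t else v for v in row] for row in m]

img = [[10, 10, 10, 10]] * 4          # a perfectly smooth 4x4 image
coeffs = threshold(haar2d(img), t=1.0)
```

For this constant image every difference coefficient vanishes, so only the low-pass corner of the coefficient array survives thresholding.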
[0132] Because the portable data collection device 10 is adapted for use in remote on-site locations for reading a dataform identifying a particular item or capturing an image of an item, it is desirable to enable the imaging assembly 18 to also capture a verbal message from the operator. The control and decoder board 22 also includes a voice capture module 304 for capturing and digitizing an operator's verbal message and voice compression circuitry 306 for compressing the captured verbal message. The voice capture module 304 is coupled to the microphone 34 and is operable by the control and selection circuitry 284 to capture and digitize audio input. The voice compression circuitry 306 compresses a digitized voice signal. Data 308 representing the compressed digitized voice signal is coupled to the serial output circuitry 296.
[0133] For a predetermined period of time after either the dataform reading trigger 26 is depressed to initiate a dataform reading session (dataform reading mode) or the imaging trigger 28 is depressed to initiate an image capture session (imaging mode), the control and selection circuitry 284 monitors the imaging trigger 28. If the operator depresses the trigger 28 during the predetermined period, the voice capture module 304 and voice compression circuitry 306 are activated for verbal input. As long as the operator keeps the trigger depressed, the voice capture module 304 and voice compression circuitry 306 remain activated so that the operator can speak into the microphone 34 and provide information concerning an item whose image was captured or whose dataform was read; this information will be transmitted and/or stored with the corresponding image or decoded dataform. Normally, the voice capture module 304 will be used subsequent to an imaging session when the operator wants to communicate to supervisory personnel reviewing the captured image some additional information concerning the imaged item, such as the item's location, a short description of the problem with the item, etc. The voice compression circuitry 306 utilizes one of a number of voice compression algorithms well known to those skilled in the art.
[0134] Decoded dataform data 298, compressed image data 300 and compressed digitized voice data 308 are routed to the serial output circuitry 296, which assembles output data 310 for serial output through a serial output port 312. In the portable data collection device 10 of the present embodiment, the serial output port 312 is coupled to an input port of a radio module 314 mounted on the control and decoder board 22 (shown schematically in FIG. 5). The radio module 314 modulates and transmits the output data 310 to a remote device (not shown) where the transmitted data is demodulated. The demodulated output data may be used to update inventory and/or accounting records, update production control, expediting or product tracking files, permit supervisory corrective action to remove/repair damaged items, etc.
[0135] The control and decoder board 22 further includes exposure parameters control circuitry 316 which outputs control signals 318, 320 to the exposure period control circuitry 254 and the gain control circuitry 252 of the camera assembly 38 and a signal 322 embodying an appropriate set of reference voltages for operating the A/D converter circuitry 272. The exposure parameters control circuitry 316 includes fuzzy logic circuitry 324 which analyzes captured frames of data accessed from the frame buffer memory 274. The fuzzy logic circuitry 324 analyzes a captured frame to determine whether the current exposure period of the 2D photosensor array 48, the current amplification of the video signal 250 by the gain control circuitry 252 and the reference voltages used by the A/D converter circuitry 272 are resulting in an "acceptable" captured image frame. If not, the control signal 318 is changed to adjust the exposure period of the 2D photosensor array 48, and/or the control signal 320 is changed to adjust the amplification of the video signal 250, and/or the signal 322 is changed to adjust the operation of the A/D converter circuitry 272. After the adjustment, another captured frame is analyzed by the fuzzy logic circuitry 324 and, if necessary, further adjustments are made in an iterative fashion until the camera assembly 38 produces an "acceptable" captured image. A suitable exposure parameter control circuit including fuzzy logic control circuitry is disclosed in U.S. Pat. No. 5,702,059, previously referenced.
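The adjust-and-recapture loop described above can be sketched as follows. The actual fuzzy logic rules are those of U.S. Pat. No. 5,702,059; here a simple proportional brightness rule stands in for them, and the function names, target value and tolerance are invented for illustration.

```python
# Hedged sketch of iterative exposure control: capture a frame, judge it
# "acceptable" against a brightness target, and if not, scale the exposure
# period and recapture, repeating until the frame is acceptable.

def auto_expose(capture, target=128.0, tol=10.0, max_iters=20):
    """Iteratively adjust exposure until mean brightness is acceptable."""
    exposure = 1.0
    for _ in range(max_iters):
        frame = capture(exposure)
        mean = sum(frame) / len(frame)
        if abs(mean - target) <= tol:        # "acceptable" frame
            return exposure, frame
        exposure *= target / max(mean, 1.0)  # proportional stand-in rule
    return exposure, frame

# Toy sensor whose brightness is proportional to exposure, clipped at 255.
sensor = lambda exp: [min(255, 40 * exp)] * 16
exposure, frame = auto_expose(sensor)
```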
[0136] As can be seen in FIGS. 10 and 34A, the power source 24 is coupled to the control and decoder board 22 to provide operating power to the microprocessor 266, other circuitry mounted on the board and the radio module 314. Power circuitry 326 under the control of the microprocessor 266 is coupled through a lead 328 to the illumination assembly 42 and the camera assembly 38 to supply power to these components of the imaging assembly 18.
[0137] The flow chart shown in FIG. 35 illustrates the operation of the imaging assembly 18 in the dataform decoding mode and a first operating embodiment of the imaging mode. In the first operating embodiment of the imaging mode, a single frame of the image in the target area 44 is captured, compressed and output when the operator depresses the imaging trigger 28. The flow chart shown in FIG. 36 illustrates the operation of the imaging assembly 18 in the dataform decoding mode and a second operating embodiment of the imaging mode. In the second operating embodiment of the imaging mode, successive frames of the image in the target area 44 are captured, compressed and output as long as the operator keeps the imaging trigger 28 depressed. The flow chart in FIG. 37 illustrates a third operating embodiment in which the imaging assembly is actuated in the dataform reading mode both to decode a dataform within the image area and to capture digital image data from a selected image area, such as a signature box. The imaging assembly 18 determines a position of the dataform in the target area and then determines the position of the signature box. The digital image data corresponding to the portion of the image area including the signature box is output in either compressed or noncompressed form through the serial output port 312.
[0138] The imaging mode is advantageously employed when the operator using the portable data collection device 10 notices that the item 46 is damaged, out of place, incomplete, etc. The imaging mode of the imaging assembly 18 is used to capture an image of the item 46 and, using the radio module 314, transmit the captured image to a remote device accessible by supervisory personnel so that the problem may be ascertained and appropriate corrective action taken, e.g., deletion of the item from inventory records, issuance of an order to remove the item from its storage location and return it to the production facility or vendor for rework/repair, moving the item to its proper location, filing an insurance claim, etc.
[0139] Turning to the first operating embodiment of the imaging mode shown in FIG. 35, at 400 the imaging assembly 18 waits for a signal representing actuation of either the imaging trigger 28 or the dataform reading trigger 26 to commence either an image capture session or a dataform reading session. The signal may be generated by the image capture trigger circuit 28a, the dataform reading trigger circuit 26a or by customer specific application software. At 402, upon receiving an appropriate signal, the imaging assembly 18 is activated and a frame of image data is captured and stored in the frame buffer memory 274.
[0140] At 404, the fuzzy logic circuitry 324 determines whether the captured image frame is acceptable, that is, whether the image is within predetermined acceptable ranges for brightness and the magnitude of charges on the photosensors of the 2D photosensor array 48. If the fuzzy logic circuitry 324 determines the captured frame is not acceptable, one or more of the operating parameters of the camera assembly 38 and the A/D converter circuitry 272 are modified, as shown at step 406. The loop represented by steps 402, 404 and 406 is repeated until the captured frame is determined to be acceptable.
[0141] At step 408, if the control and selection circuitry 284 determines that the activation signal is from the dataform reading trigger 26, requiring a dataform decode, the captured frame is coupled to the image processing and decoder circuitry 290 for attempted decoding of the dataform represented in the captured frame. At step 410, the decoding circuitry 292 attempts to decode the dataform represented in the captured frame. At step 412, a determination is made whether the decoding was successful. If the decoding was successful, at step 414 the extracted decoded data is output to the serial output circuitry 296, and at step 416 the green LED indicator 32 is energized for a predetermined time to signal the operator that the dataform 45 in the target area 44 has been successfully read. Subsequently, the imaging assembly 18 is turned off.
[0142] If, at step 412, the decoding was not successful, the selection circuitry energizes the red LED indicator 30 for a predetermined time to signal to the operator that the decoding was unsuccessful and that he or she should continue to point the device 10 at the dataform 45 in the target area 44. The process then returns to step 402, where another image frame is captured and the remaining steps are repeated.
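The FIG. 35 dataform path (capture, attempt a decode, flash green on success, flash red and retry on failure) can be sketched as a small loop. All function names are hypothetical stand-ins for the circuitry described above, and the retry limit is an invented safeguard; the device itself retries for as long as the trigger is held.

```python
# Hedged sketch of the dataform reading flow of FIG. 35: capture a frame
# (step 402), attempt a decode (steps 410-412), signal success with the
# green LED (step 416) or failure with the red LED and retry (step 402).

def read_dataform(capture_frame, decode, signal_led, max_attempts=5):
    for _ in range(max_attempts):
        frame = capture_frame()           # step 402
        data = decode(frame)              # steps 410-412; None = failure
        if data is not None:
            signal_led("green")           # step 416: successful read
            return data
        signal_led("red")                 # unsuccessful: keep aiming
    return None

# Toy run: the first two frames fail to decode, the third succeeds.
leds = []
frames = iter([None, None, "PDF417:123"])
result = read_dataform(lambda: next(frames), lambda f: f, leds.append)
```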
[0143] If, at step 408, the control and selection circuitry 284 determines that the activation signal is from the imaging trigger 28, the captured frame is routed to the image compression circuitry 294 to compress the data in the captured frame, as shown at step 418. At step 420, the compressed image data is output to the serial output circuitry 296 and the green LED indicator 32 is energized to signal the operator that the image in the target area 44 has been successfully captured.
[0144] Referring to FIG. 36, in a second operating embodiment of the imaging mode, successive frames of an image of the target area 44 are captured for as long as the operator keeps the imaging trigger 28 depressed. This operating embodiment is advantageous in situations where the item 46 which the operator wishes to image, because of some defect, damage, etc., is very large compared to the target area 44. In such situations, capturing a single image frame and transmitting a signal corresponding to the captured frame to a remote device for supervisory review may not provide supervisory personnel with an image covering a large enough portion of the item 46 to ascertain the problem and determine appropriate corrective action. By capturing successive frames during the period that the operator keeps the imaging trigger 28 depressed, the operator may move the portable data collection device 10 with respect to the item 46 to provide a video image of the complete item (or an image of as much of the item as necessary to permit identification of the item and the item's problem).
[0145] For this embodiment, the process remains generally the same as the embodiment described in connection with FIG. 35. However, after the output of compressed data to the serial output circuitry 296 at step 420, the control and selection circuitry 284, at step 422, checks to see whether a signal has been received from the image capture trigger circuitry 28a indicating that the operator has released the imaging trigger 28. If such a signal has been received, then at 424 the control and selection circuitry 284 energizes the green LED indicator 32 for a predetermined time period to signal the operator that the image in the target area 44 has been successfully captured. Subsequently, the imaging assembly 18 is turned off.
[0146] If no signal is received from the image capture trigger circuitry 28a indicating that the operator has released the imaging trigger 28, the process loops back to step 402, and successive image frames are captured, compressed and output to the serial output circuitry 296 until the control and selection circuitry 284 receives the signal from the image capture trigger circuitry 28a indicating that the imaging trigger 28 has been released.
[0147] As can best be seen in FIGS. 10 and 34, the imaging assembly 18 includes the camera assembly 38, which is electrically coupled to the control and decoder board 22. The control and decoder board 22 includes the microprocessor 266 and associated circuitry. The circuitry of the imaging assembly 18 may be embodied in software resident in one or more RAM or ROM memory chips 430 (FIG. 5) mounted on the control and decoder board 22 and operated by the microprocessor 266. Alternately, the circuitry of the imaging assembly 18 may comprise separate application-specific integrated circuitry (ASIC) mounted on the control and decoder board 22.
[0148] In the third operating embodiment of the portable data collection device 10 of the present invention, the dataform decoding mode is actuated to capture, compress and output an image contained within the boundary of an image area associated with a dataform. For example, the desired image area may be a signature block positioned a predetermined distance from a dataform. In FIG. 33, a signature block 420 is associated with a 2D dataform 434 known as MaxiCode (MaxiCode™ is a symbology standard of United Parcel Service). The signature block 420 is positioned at a predetermined location with respect to the dataform 434.
[0149] The dataform 434 is imprinted on a label affixed to a package to be delivered to a recipient. When the package is delivered, the recipient signs his or her signature 436 within a perimeter of the signature block 420. To document delivery of the package, the portable data collection device imaging assembly 18 is actuated with the dataform reading trigger 26 to image and decode the dataform 434. However, in addition to decoding the dataform 434, it would be desirable to store the portion of the captured image corresponding to the image within the signature block 420 to prove the recipient's acknowledgement of receipt of the package.
[0150] In the third operating embodiment, the imaging assembly 18 will capture an image of the target area 44 including both the dataform 434 and the signature block 420. The output data sent to the serial output circuitry 296 will include the decoded dataform data and a compressed digital image of the image within the signature block 420, i.e., the signature 436.
[0151] FIG. 37 is a flowchart summarizing this third operating embodiment. At step 500, the imaging assembly 18 waits for the start of a dataform read session, which is typically initiated by the operator pulling the dataform reading trigger 26. After imaging the target area 44, at step 502, a frame of an image of the target area 44 is captured and a digital representation is stored in the frame buffer memory 274. The fuzzy logic control circuitry 324 determines whether the captured image frame is acceptable for decoding at step 504. If the frame is not acceptable, parameters are adjusted at step 506.
[0152] If the captured image frame is acceptable for decoding, at step 508 the decoding circuitry 292 attempts to decode cell data values associated with the illumination intensity data values stored in the frame buffer memory 274. At step 510, if the cell data values are decodeable, then at step 512 decoding of the dataform 434 occurs. The signature block 420 is located at a predetermined position with respect to the dataform 434; that is, the location, size and/or orientation of the signature block 420 with respect to the dataform 434 is fixed. Data representative of the predetermined position may be encoded in the dataform or may be preprogrammed into the portable data collection device's application software. Also included in the dataform are certain distinguishing features that permit locating the dataform 434 in the target area, for example, the "bulls eye" mark at the center of a MaxiCode dataform.
[0153] Other dataform formats include different distinguishing features, such as a guard bar for PDF-417 or SuperCode dataforms or orientation markers for Data Matrix dataforms. Using the predetermined position data in conjunction with the distinguishing features of the dataform, the location, size and/or orientation of the signature block 420 within the target area 44 is determined at step 514. At step 516, a digital representation of the portion of the image corresponding to the signature block 420 is coupled to the image compression circuitry 294 for data compression.
[0154] The compressed image data representing the signature block 420 and at least a portion of the decoded dataform data are output to the serial output circuitry 296, at step 518, for subsequent transmission by the radio module 314 to a remote device. At step 520, the green LED 32 is energized for a predetermined time, signaling to the operator that the dataform 434 was successfully decoded and an image of the signature block 420 was successfully captured and output to the serial output circuitry 296. If the captured frame is not decodeable at step 510, the red LED 30 is energized for a predetermined time to inform the operator that the read was unsuccessful and that he or she should keep the dataform reading trigger 26 depressed and keep the data collection device 10 aimed at the dataform 434 until a successful read is obtained.
[0155] It should be appreciated that, because the predetermined positional data for a desired image area such as a signature block located at a predetermined position with respect to a dataform may be preprogrammed into the portable data collection device, digital image data of a portion of the desired image area may be output without the necessity of decoding the dataform. After storing a digital representation of the target area 44 and locating the distinguishing features of the dataform 434, the location of the signature block 420 can be calculated based on the preprogrammed predetermined position data and the location of the distinguishing features of the dataform.
[0156] Regardless of whether the predetermined positional data is preprogrammed into the data collection device 10 or encoded in the dataform, there will be uses for the device 10 of this invention wherein only some of the dataforms will have associated desired image areas. Therefore, it is desirable for a dataform to include an indication as to whether there exists an associated desired image area to be captured and output. The indication may be encoded in the dataform, or the dataform format itself may be the indication. For example, all MaxiCode formats may be known to have an associated desired image area which is to be captured and output.
[0157] In the signature block placement of FIG. 33, the block is centered below the dataform 434 at a distance "g" from the dataform. The height of the block is H and the width is W. The dataform is of a predetermined size having a height "Y". To locate the signature block 420 in the target area 44, the coordinate location of the center (xc, yc) and the height of the dataform "Y" are determined in the pixel coordinate domain. Then, the formulas for calculating the positions of the four corners of the signature box in the pixel coordinate domain are as follows:
Upper-left corner: (xl − xc, yu − yc) = (−W/2, Y/2 + g)

Upper-right corner: (xr − xc, yu − yc) = (W/2, Y/2 + g)

Lower-left corner: (xl − xc, yl − yc) = (−W/2, Y/2 + g + H)

Lower-right corner: (xr − xc, yl − yc) = (W/2, Y/2 + g + H)
[0158] The formulas to correct each x or y value for an angular rotation θ about the dataform center (xc, yc) are as follows:

x′ = cos θ·(x − xc) − sin θ·(y − yc) + xc

y′ = sin θ·(x − xc) + cos θ·(y − yc) + yc
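The corner and rotation formulas above can be combined into a short worked example. This is an illustrative sketch, not the device's firmware; the function name and the sample dimensions are invented, and the pixel coordinate convention assumed here is that y increases downward, so "below the dataform" is the +y direction, matching the corner formulas in the text.

```python
import math

# Compute the four signature-box corners in pixel coordinates: take the
# unrotated offsets from the dataform center (xc, yc), then rotate each
# offset by theta about that center.

def signature_box_corners(xc, yc, Y, g, W, H, theta=0.0):
    offsets = [(-W / 2, Y / 2 + g),          # upper-left
               ( W / 2, Y / 2 + g),          # upper-right
               (-W / 2, Y / 2 + g + H),      # lower-left
               ( W / 2, Y / 2 + g + H)]      # lower-right
    c, s = math.cos(theta), math.sin(theta)
    return [(c * dx - s * dy + xc, s * dx + c * dy + yc)
            for dx, dy in offsets]

# Hypothetical dataform: center (100, 100), height 40, gap 10, box 60x20.
corners = signature_box_corners(xc=100, yc=100, Y=40, g=10, W=60, H=20)
```

With θ = 0 the rotation is the identity, so the corners are simply the offsets shifted to the dataform center.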
First Alternate Embodiment of Focusing Assembly of Optic Assembly

[0159] An alternate embodiment of the focusing assembly is shown in FIG. 42 generally at 900. Components that have the same structure as the corresponding components described with respect to the focusing assembly 800 disclosed above will be assigned the same reference numbers as in the first embodiment, followed by a prime (′). For example, the focusing optic 810′ of the focusing assembly 900 is identical in structure and function to the focusing optic 810 of the focusing assembly 800. The focusing assembly 900 includes a focusing optic 810′ comprising two wedge-shaped lenses 820′, 830′ which are congruent in shape and supported in a lens support fixture (not shown, but identical to the lens support fixture 840 of the focusing assembly 800 described above). The lens 820′ is moveable with respect to the lens 830′ along a path of travel T′ to change the thickness t′ of the focusing optic 810′, as described above with respect to the lenses 820, 830 in the focusing assembly 800.
[0160] The focusing assembly 900 includes a drive assembly 960 to move the moveable lens 820′ along the path of travel T′. The drive assembly 960 includes a rod 962 having one end 964 attached to a base surface 829′ of the lens 820′. An opposite end 966 of the rod 962 defines a pin 968 extending vertically above and below the upper and lower surfaces of the rod 962. A V-shaped pivoting member 970 includes an oval shaped opening 972 in an end portion of one arm 974. The end portion of the arm 974 defines a slotted opening 975, as can best be seen in FIG. 42A. The end portion 966 of the rod 962 fits within the slotted opening 975 and the pin 968 slidingly fits within the oval shaped opening 972 to pivotally secure the rod 962 to the V-shaped pivoting member 970. A second arm 976 of the V-shaped pivoting member 970 includes a pin 978 which slides within a slot shaped opening 979 in a rearward facing portion of a slider 980. The V-shaped pivoting member 970 is supported on a vertical post 982 extending from an inner surface 16a of the housing snout 16. The V-shaped pivoting member 970 pivots about a pin 984 extending vertically upwardly from the post 982.
[0161] A portion 986 of the slider 980 extends through a slotted opening 988 in a side of the snout 16. The slider 980 is slidably confined between a pair of parallel ledges 990, 992 which extend outwardly from the inner surface 16a of the housing snout 16. The ledges have vertical lip portions 990a, 990b (FIG. 42A) to further confine the slider 980. As the slider portion 986 is moved along the slotted opening 988 in the direction labeled R in FIG. 42, the V-shaped pivoting member 970 pivots in the clockwise direction labeled C about the pin 984. This causes the rod 962 to move in the direction labeled D and also causes the moveable lens 820′ to move in the direction D along its path of travel T′. Movement of the lens 820′ in the direction D causes the thickness t′ of the focusing optic 810′ to increase.
Second Alternate Embodiment of Focusing Assembly of Optic Assembly

[0162] A second alternate embodiment of the focusing assembly is shown in FIG. 43 generally at 1000. Again, components that have the same structure as the corresponding components described with respect to the focusing assemblies 800, 900 disclosed above will be assigned the same reference numbers as in the first embodiment, followed by a double prime (″). For example, the focusing optic 810″ of the focusing assembly 1000 is identical in structure and function to the focusing optic 810 of the focusing assembly 800. The focusing assembly 1000 includes a focusing optic 810″ comprising two wedge-shaped lenses 820″, 830″ which are congruent in shape and supported in a lens support fixture (not shown, but identical to the lens support fixture 840 of the focusing assembly 800 described above). The lens 820″ is moveable with respect to the lens 830″ along a path of travel T″ to change the thickness t″ of the focusing optic 810″, as described above with respect to the lenses 820, 830 in the focusing assembly 800. The focusing assembly 1000 includes a drive assembly 1060 to move the moveable lens 820″ along the path of travel T″. The drive assembly 1060 includes a stepper motor 1062 having a pinion gear 1064 mounted to one end of the motor shaft 1066. A rack 1070 is coupled to an end 829″ of the moveable lens 820″. A drive portion 1072 of the rack 1070 includes linear gearing that meshes with the pinion gear 1064 of the stepper motor 1062. The rack 1070 slides in a grooved portion of a support 1074 extending from the modular housing 20. The stepper motor 1062 is configured to precisely rotate the motor shaft 1066 in either the clockwise or counterclockwise direction in increments (or steps) of 1/36 of a revolution (10 degree increments), thus providing precise control over the position of the lens 820″ along its path of travel T″.
[0163] The stepper motor 1062 is controlled by focusing circuitry 1080 mounted on the control and decoder board. As can be seen schematically in FIG. 44A, the focusing circuitry 1080 receives input from the frame buffer memory 274 and analyzes the sharpness of successive captured image frames. When the focusing circuitry 1080 determines that the sharpness of a captured frame has fallen below a predetermined value, the circuitry takes corrective action by actuating the stepper motor 1062 and rotating the shaft 1066 in 10 degree increments such that the rack 1070 moves the lens 820″ in a predetermined direction along its path of travel T″. Captured frames are continuously analyzed by the focusing circuitry. When the sharpness of a captured image frame exceeds the predetermined value, the rotation of the shaft 1066 is halted by the focusing circuitry 1080 and the lens 820″ remains stationary so long as frame sharpness continues to exceed the predetermined image sharpness value.
[0164] If acceptable sharpness is not achieved by the time the lens 820″ reaches an endpoint along its path of travel T″, the focusing circuitry 1080 reverses the rotation of the stepper motor shaft 1066 and moves the lens 820″ in a direction toward the opposite path of travel endpoint. When the sharpness of a captured image frame exceeds the predetermined image sharpness value, rotation of the shaft 1066 is halted as explained above.
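The sharpness feedback loop of the two preceding paragraphs can be sketched as follows. This is a hedged illustration only: the lens position is abstracted to an integer step index, the 36-position travel range, the sharpness callback and the threshold are invented, and the real circuitry measures sharpness from frames in the frame buffer memory 274 rather than from a function.

```python
# Hedged sketch of sharpness-driven focusing: step the lens one increment
# at a time until frame sharpness exceeds a threshold, reversing direction
# when an endpoint of the path of travel is reached.

def focus(sharpness_at, threshold, lo=0, hi=35, start=0):
    """Step a lens position until sharpness_at(pos) exceeds threshold."""
    pos, step = start, 1
    for _ in range(2 * (hi - lo + 1)):    # at most one full sweep each way
        if sharpness_at(pos) > threshold:
            return pos                    # halt: sharpness acceptable
        if not lo <= pos + step <= hi:
            step = -step                  # endpoint reached: reverse
        pos += step
    return pos                            # best effort after full sweep

# Toy sharpness curve peaking at lens position 20.
best = focus(lambda p: 100 - abs(p - 20), threshold=95)
```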
Alternate Embodiment of Illumination Assembly

[0165] An alternate embodiment of an illumination assembly suitable for use in the modular portion 20 of the imaging assembly 18 of the portable data collection device 10 is shown generally at 800 in FIG. 19. The illumination assembly 800 includes a printed circuit board assembly similar to the printed circuit board assembly 60 described above. For simplicity, the same reference numbers are used to identify components of the printed circuit board assembly shown in FIG. 11 corresponding to the printed circuit board assembly 60 described above. Referring to FIG. 19, the printed circuit board assembly 60 includes a plurality of surface mount exposure illumination LEDs 66. A single piece acrylic or polycarbonate lens array 802, preferably fabricated from PMMA, is positioned between the printed circuit board assembly 60 and the target area 44 (FIGS. 5 and 10) for directing the illumination from the exposure LEDs 66 toward the target area 44. The lens array 802 is similar to the lens array 62 but provides for generation of a targeting illumination frame pattern FR (FIG. 32) which frames or surrounds the generated illumination crosshair pattern CR discussed in connection with the lens array 702. As can be seen in FIG. 10 with respect to the previously described lens array 62, the lens array 802 functions as a front panel for the modular portion 20 of the imaging assembly. The printed circuit board assembly 60 includes printed conductors and a power lead 112 operative for supplying power to the illumination LEDs 66. A suitable surface mount illumination LED is produced by the MarkTech Corporation of Latham, N.Y. as Part No. MTSM735K-UR or MTSM745KA-UR. Each illumination LED 66 provides a luminous intensity of 285 millicandela (mcd) over an angular illumination field of about 68 degrees. The small footprint of each illumination LED 66 enables four LEDs to be placed in a row measuring less than 14 mm.
The printed circuit board assembly 60 includes four banks of four illumination LEDs 66, totaling sixteen illumination LEDs providing 4560 mcd of uniform illumination over the target area 44. A central opening 67 in the printed circuit board assembly 60 provides an opening for the shroud 58 to extend through.
The lens array 802 includes four illumination optic portions 808a, 808b, 808c, 808d (FIG. 39) which are identical in dimension and optic prescription to the illumination optic portions 88a, 88b, 88c, 88d of the lens array 62. Each of the illumination optic portions 808a, 808b, 808c, 808d is aligned with a corresponding bank of illumination LEDs 66. The illumination optic portions 808a, 808b, 808c, 808d direct the 68 degree angular illumination field from each illumination LED 66 into a uniform field having an angular field of view which substantially corresponds to the angular field of view of the optic assembly 43 which defines the target area 44.[0166]
Referring to FIGS. 24 and 26, which show a horizontal cross section (FIG. 24) and a vertical cross section (FIG. 26) through the illumination optic portions 808a, 808b, 808c, 808d, it can be seen that each optic portion comprises a lens including four vertically oriented cylindrical entry optic surfaces 816 extending from a back side 817 (FIG. 24) of the lens array 802. One vertically oriented cylindrical entry surface 816 is positioned in front of a corresponding LED 66. Each optic portion 808a, 808b, 808c, 808d also includes a horizontally oriented cylindrical optic exit surface 818 extending from a front side 819 (FIG. 22) of the lens array 802. One horizontally oriented cylindrical exit optic surface 818 is positioned in front of each bank of four LEDs 66. The vertically oriented cylindrical entry optic surfaces 816 define the horizontal field of illumination and the horizontally oriented cylindrical exit surfaces 818 define the vertical field of illumination. This arrangement provides an even illumination intensity distribution across the target area 44. The 4560 mcd of illumination provided by the illumination LEDs 66 will provide an illumination intensity in excess of 106 lux when the target object 45 is at a distance of 8.5 inches from the optic surface 90 of the lens 43a. The vertically oriented entry surfaces 816 have a radius of curvature of 2.50 mm and a height I (FIG. 23) of 4.00 mm, while the horizontally oriented exit surfaces 818 have a radius of curvature of 3.00 mm and a width J (FIG. 24) of 13.75 mm. Referring to FIGS. 24-26, suitable dimensions for the lens array 802 are as follows:[0167]
| Label | Description | Dimension |
|-------|-------------|-----------|
| A | Height of lens array 802 | 21.75 mm |
| B | Width of lens array 802 | 39.55 mm |
| C | Diameter of center opening 820 of lens array 802 | 12.00 mm |
| D | Height between middle of vertical entry surfaces 816 | 14.13 mm |
| E | Thickness of lens array 802 | 1.95 mm |
| F | Maximum extension of aspheric light exit surfaces 726, 730 from back side 717 of lens array | 1.75 mm |
| G | Distance between maximum extension of aspheric light exit surfaces 726, 730 and center of respective segmented light exit surfaces 728, 732 along centerlines T-T | 5.25 mm |
| H | Distance between centerlines T-T and outer edge of lens array 702 | 7.80 mm |
| I | Height of vertically oriented entry surfaces 816 | 4.00 mm |
| J | Width of horizontally oriented exit surfaces 818 | 13.75 mm |
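A rough consistency check of the illumination figures quoted above can be made in a few lines. This is an illustrative back-of-the-envelope sketch, not the patent's analysis: the bare inverse-square point-source model is an assumption, and the specification's quoted figure of over 106 lux presumably reflects the beam-shaping of the lens array 802, which this model ignores.

```python
# Back-of-the-envelope check of the illumination figures quoted above.
# The inverse-square point-source model is an illustrative assumption;
# the actual lens array concentrates the beams, so this only approximates
# the illuminance the specification quotes.

INCH_M = 0.0254  # meters per inch

def total_intensity_mcd(banks=4, leds_per_bank=4, mcd_per_led=285):
    """Total luminous intensity of the LED banks in millicandela."""
    return banks * leds_per_bank * mcd_per_led

def point_source_lux(intensity_mcd, distance_in):
    """Illuminance (lux) at the given distance, treating the LED banks
    as a single point source: E = I / d^2, with I in candela, d in meters."""
    d_m = distance_in * INCH_M
    return (intensity_mcd / 1000.0) / (d_m ** 2)
```

With the quoted values, `total_intensity_mcd()` reproduces the 4560 mcd figure, and the point-source estimate at 8.5 inches comes out on the same order as the quoted illuminance.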
Referring again to FIG. 19, the illumination assembly also includes a targeting arrangement or assembly to aid in aiming the device 10 at the target object 45. The targeting assembly includes the targeting LED illuminators 64a, 64b, which extend into apertures 68, 70 in the printed circuit board assembly 60 and, when energized, project illumination into first and second targeting optics 822, 824, respectively, of the lens array 802. The first and second targeting optics 822, 824 are mirror images of each other and are identical in configuration.[0168]
As shown in FIG. 30, the targeting optic 822 generates a crosshair pattern of illumination CR1 and a half frame pattern of illumination FR1. As shown in FIG. 31, the targeting optic 824 generates a crosshair pattern of illumination CR2 and a half frame pattern of illumination FR2. When the device 10 is properly focused on the target object 45 at the minimum best focus position MIN S2 of the optic assembly 43 and properly oriented such that the lens array 802 is substantially parallel with the target object 45, the crosshair patterns of illumination CR1, CR2 coincide or overlap to form a crosshair pattern of illumination CR, just like the crosshair pattern CR formed by the lens array 702. As can be seen in FIG. 32, the crosshair pattern CR is characterized by a horizontal portion of width w (18 mm), a vertical portion of height h (18 mm) and a pattern thickness t. Furthermore, the half frame patterns of illumination FR1, FR2 are configured as complementary halves of a rectangle which form a full frame pattern of illumination FR, as shown in FIG. 32, which frames or surrounds the crosshair pattern CR. Like the crosshair pattern of illumination, the frame pattern of illumination FR is not a line but an illumination intensity pattern having a thickness represented in FIG. 32 by the distance labeled T. At the minimum best focus position MIN S2, the frame pattern of illumination FR has a vertical height of 60 mm, labeled H in FIG. 32, which is substantially equal to the height of the target area 44 at the minimum best focus position MIN S2, and a horizontal span of 80 mm, labeled W in FIG. 32, which is substantially equal to the width of the target area 44 at the minimum best focus position MIN S2.[0169]
The first and second targeting optics 822, 824, which are identical in configuration, are shown in cross section in FIGS. 25 and 26. The first targeting optic 822 comprises a lens with an aspherical light entry optic surface 826 and a segmented cylindrical light exit optic surface 828. The second targeting optic 824 comprises a lens with an aspherical light entry optic surface 830, similar to the aspherical light entry optic surface 826, and a segmented cylindrical light exit optic surface 832, similar to the segmented cylindrical light exit optic surface 828.[0170]
The aspherical entry surfaces 826, 830 each have a diameter of 8 mm, a radius of curvature of 2.890 mm and a conic constant of −2.534. The segmented cylindrical light exit surfaces 828, 832 each have an 8.0 mm by 8.0 mm square shaped outer perimeter. The segmented cylindrical surface 828 is comprised of four triangular shaped sections 840, 842, 844, 846 (FIG. 22) while the segmented cylindrical surface 832 is divided into four triangular shaped sections 850, 852, 854, 856, wherein sections 840 and 850 are identical, sections 842 and 852 are identical, sections 844 and 854 are identical, and sections 846 and 856 are identical.[0171]
The upper triangular section 840 comprises a vertically oriented cylindrical light exit optic surface with a triangular shaped corner region 860 having a horizontally oriented cylindrical light exit optic surface (radius of curvature 25.00 mm) in the upper left hand corner as seen in FIG. 22. The vertically oriented cylindrical light exit optic surface of the upper triangular section 840 (not including the corner region 860) is similar in optic configuration to the upper triangular section 740 described above.[0172]
The lower triangular section 844 also comprises a vertically oriented cylindrical light exit optic surface with a triangular shaped corner region 864 having a horizontally oriented cylindrical light exit optic surface (radius of curvature 25.00 mm) in the lower left hand corner as seen in FIG. 22. The vertically oriented cylindrical light exit optic surface of the lower triangular section 844 (not including the corner region 864) is similar in optic configuration to the lower triangular section 744 described above.[0173]
The right triangular section 842 comprises a horizontally oriented cylindrical light exit optic surface and is similar in optic configuration to the right triangular section 742 discussed above. The left triangular section 846 comprises a horizontally oriented cylindrical light exit optic surface with first and second triangular regions 866, 867. The horizontally oriented cylindrical light exit optic surface of the left triangular section 846 (not including the corner regions 866, 867) is similar in optic configuration to the left triangular section 746 discussed above. The triangular region 866 is adjacent the triangular corner region 860 and comprises a vertically oriented cylindrical light exit optic surface (radius of curvature 25.00 mm). The triangular region 867 is adjacent the triangular corner region 864 and comprises a vertically oriented cylindrical light exit optic surface (radius of curvature 25.00 mm).[0174]
The upper triangular section 850 comprises a vertically oriented cylindrical light exit optic surface with a triangular shaped corner region 870 having a horizontally oriented cylindrical light exit optic surface (radius of curvature 25.00 mm) in the upper right hand corner as seen in FIG. 22. The vertically oriented cylindrical light exit optic surface of the upper triangular section 850 (not including the corner region 870) is similar in optic configuration to the upper triangular section 750 described above.[0175]
The lower triangular section 854 also comprises a vertically oriented cylindrical light exit optic surface with a triangular shaped corner region 874 having a horizontally oriented cylindrical light exit optic surface (radius of curvature 25.00 mm) in the lower right hand corner as seen in FIG. 22. The vertically oriented cylindrical light exit optic surface of the lower triangular section 854 (not including the corner region 874) is similar in optic configuration to the lower triangular section 754 described above.[0176]
The left triangular section 852 comprises a horizontally oriented cylindrical light exit optic surface and is similar in optic configuration to the left triangular section 752 discussed above. The right triangular section 856 comprises a horizontally oriented cylindrical light exit optic surface with first and second triangular regions 876, 877. The horizontally oriented cylindrical light exit optic surface of the right triangular section 856 (not including the corner regions 876, 877) is similar in optic configuration to the right triangular section 756 discussed above. The triangular region 876 is adjacent the triangular corner region 870 and comprises a vertically oriented cylindrical light exit optic surface (radius of curvature 25.00 mm). The triangular region 877 is adjacent the triangular corner region 874 and comprises a vertically oriented cylindrical light exit optic surface (radius of curvature 25.00 mm).[0177]
The optic surfaces of the corner regions 860, 864, 866, 867 are tilted with respect to the optic surfaces of their corresponding triangular sections 840, 844, 846 such that illumination from the targeting LED 64a is focused through the corner region optic surfaces to generate the half frame illumination pattern FR1. Similarly, the optic surfaces of the corner regions 870, 874, 876, 877 are tilted with respect to their corresponding triangular sections 850, 854, 856 such that illumination from the targeting LED 64b is focused through the corner region optic surfaces to generate the half frame illumination pattern FR2. The tilt angles of the corner regions 860 and 866 will be examined; the same tilt angles are used correspondingly for all the other corner regions, and the discussion will not be repeated for each region.[0178]
Prior to discussing the tilt angles of the corner regions 860, 866, it is important to note that the light exit optic surfaces of the triangular sections 840, 842, 844, 846, 850, 852, 854, 856 have optical surfaces with the angle of tilt (9.85 degrees) discussed in detail with respect to the lens array 702 above. Thus, the triangular sections 840 and 846 have optical surfaces with a 9.85 degree angle of tilt downwardly (as viewed in FIG. 25) toward the front side 819 of the lens array 802.[0179]
The corner regions 860, 866 and the triangular sections 840, 846 are symmetric about the diagonal line 880. As can best be seen in FIG. 25A, the optical surfaces of the corner regions 860, 866 are tilted at an angle, labeled d, of 11.50 degrees with respect to the horizontal axis (axis L-L). The tilt angle of the corner regions is opposite that of the triangular sections 840, 846.[0180]
At the minimum best focus position MIN S2 of 140 mm (5.5 inches) in front of the optic surface 90, the frame illumination pattern FR has an approximate height h of 60 mm (2.4 in.) and an approximate width w of 80 mm (3.2 in.), which substantially corresponds to the dimensions of the target area 44 at the minimum best focus position MIN S2. At the minimum best focus position MIN S2, the target area 44 has a height of 62 mm (2.4 in.) and a width of 82 mm (3.2 in.). As was the case in the illumination assembly embodiment including the lens array 702, the crosshair illumination pattern CR has a height of 18 mm and a width of 18 mm at the minimum best focus position MIN S2.[0181]
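The target-area dimensions quoted at the minimum best focus position imply a particular angular field of view. The sketch below is an illustrative check under an assumed simple pinhole model (the target area scaling linearly with distance from the optic surface 90); it is not a calculation taken from the specification.

```python
import math

# Hypothetical geometry check, not from the specification: assume the
# target area 44 scales linearly with distance from the optic surface 90
# (a simple pinhole model) and recover the implied field angles from the
# quoted target-area dimensions at the minimum best focus position MIN S2.

def field_angle_deg(extent_mm, distance_mm):
    """Full angle subtended by a target-area extent centered on the optic axis."""
    return 2 * math.degrees(math.atan((extent_mm / 2) / distance_mm))

# At MIN S2 (140 mm), the quoted 82 mm x 62 mm target area implies a field
# of view of roughly 33 degrees horizontally by 25 degrees vertically.
horizontal_fov = field_angle_deg(82, 140)
vertical_fov = field_angle_deg(62, 140)
```

Under the same assumed model, the 80 mm x 60 mm frame pattern FR subtends nearly the same angles, consistent with the specification's statement that the frame substantially matches the target area at MIN S2.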
While the foregoing describes the currently preferred embodiments of the invention, those skilled in the art will recognize that other modifications may be made without departing from the invention, and it is intended to claim all such modifications and variations as fall within the scope of the invention.[0182]
In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims, appropriately interpreted in accordance with the doctrine of equivalents.[0183]