AUTOMATED FOCUS EVALUATION OF HISTOLOGICAL IMAGES
FIELD OF THE INVENTION
The present disclosure relates to automated focus evaluation of histological images and, more particularly, to systems and methods for using and for generating labeled data suitable for training computer-implemented machine-learning models to evaluate whether histological images are in focus.
BACKGROUND
Histological image scanners are often equipped with high-resolution image sensors capable of capturing digital images of entire histological slides. Histological slides digitized in this manner can be saved for later review and inspection by a human operator. Histological scanners can also use an objective lens to increase the magnification of images captured of histological slides.
SUMMARY
An example of a method of automated slide image focus evaluation includes receiving an image scan of a histological slide, segmenting the image scan into a plurality of regions, generating a focus score for each region of the plurality of regions using a trained computer-implemented machine-learning model, determining a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generating a determination of whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generating an indication of the determination of whether the image scan is acceptable for diagnostic use. The trained computer-implemented machine-learning model is configured to accept image data as an input and to output a value describing an extent to which the image data is in focus.
An example of a system for focus evaluation of a histological slide includes a histological image scanner, a processor operatively connected to the histological image scanner, and at least one computer-readable memory. The at least one computer-readable memory is encoded with instructions that, when executed, cause the processor to receive an image scan of a histological slide from the histological image scanner, segment the image scan into a plurality of regions, generate a focus score for each region of the plurality of regions using a trained computer-implemented machine-learning model, determine a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generate a determination of whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generate an indication of the determination. The trained computer-implemented machine-learning model is configured to accept image data as an input and to output a value describing an extent to which the image data is in focus.
A further example of a method of automated slide image focus evaluation of histological images includes adjusting a distance between a lens and a stage on which a histological slide is placed to a predicted in-focus distance, capturing an image scan of the histological slide while the distance between the lens and the stage is the predicted in-focus distance, segmenting the image scan into a plurality of regions, generating a focus score for each region of the plurality of regions using a trained computer-implemented machine-learning model, determining a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generating a determination whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generating an indication of the determination of whether the image scan is acceptable for diagnostic use. The predicted in-focus distance is generated by an auto-focusing algorithm and the trained computer-implemented machine-learning model is configured to accept image data as an input and to output a value describing an extent to which the image data is in focus.
The present summary is provided only by way of example, and not limitation. Other aspects of the present disclosure will be appreciated in view of the entirety of the present disclosure, including the entire text, claims, and accompanying figures.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a front view of an image analysis system with an expanded schematic diagram illustrating a system for training and using computer-implemented machine-learning models for focus evaluation of histological images.
FIG. 2 is a schematic diagram illustrating the system of FIG. 1 with components of an image scanner shown in greater detail.
FIG. 3 is a schematic diagram of an example of focus evaluation workflow of a histological image performable by the system of FIGS. 1-2.
FIG. 4 is a schematic diagram of a further example of focus evaluation workflow of a histological image performable by the system of FIGS. 1-2.
FIG. 5 is a schematic diagram of yet a further example of focus evaluation workflow of a histological image performable by the system of FIGS. 1-2.
FIG. 6 is a flow diagram of an example of a method of focus evaluation of a histological image performable by the system of FIGS. 1-2.
FIG. 7A is a schematic diagram of an example of z-stacked images generated by the system of FIGS. 1-2.
FIG. 7B is a schematic diagram of an example of an image region sequence derived from the z-stacked images of FIG. 7A.
FIG. 8 is an example of a graph of normalized focus scores generated by the system of FIGS. 1-2.
FIG. 9 is a flow diagram of an example of a method of generating training data for training a computer-implemented machine-learning model to perform focus evaluation and that is performable by the system of FIGS. 1-2.
FIG. 10 is a flow diagram of an example of a method of generating composite images for training computer-implemented machine-learning models using normalized focus scores generated by the method of FIG. 9 and that is performable by the system of FIGS. 1-2.
FIG. 11 is a flow diagram of an example of a method of training a computer-implemented machine-learning model performable by the system of FIGS. 1-2 and suitable for use with labeled data generated by the method of FIG. 9.
FIG. 12A is a block diagram of an example of a line scan camera having a single linear array.
FIG. 12B is a block diagram of an example of a line scan camera having a charge coupled device (CCD) array.
FIG. 12C is a block diagram of an example of a line scan camera having a time delay integration (TDI) array.
While the above-identified figures set forth one or more examples of the present disclosure, other examples are also contemplated, as noted in the discussion. In all cases, this disclosure presents the invention by way of representation and not limitation. It should be understood that numerous other modifications and examples can be devised by those skilled in the art, which fall within the scope and spirit of the principles of the invention. The figures may not be drawn to scale, and applications and examples of the present invention may include features and components not specifically shown in the drawings.
DETAILED DESCRIPTION
The present disclosure relates to systems and methods for analyzing the focus of images, generating image data labeled with focus information, and training computer-implemented machine-learning models to predict the relative focus of image data using labeled image data generated according to the present disclosure. Histological image scanners often perform focusing using auto-focusing algorithms that can misidentify the correct focal distance for capturing an in-focus image of a histological slide. Further, human operators can also misidentify the correct focal distance for imaging a histological slide when manually selecting the focal distance (i.e., by manually setting the distance between the imaging objective lens and the histological slide). Out-of-focus histological images can cause diagnostic errors in examples where the out-of-focus image is not detected or otherwise identified as out of focus. Further, it can be difficult for human operators to detect out-of-focus histological images by visual inspection alone.
The systems and methods disclosed herein enable improved and automated focus evaluation of images captured using a histological image scanner where the initial focusing was performed using, for example, an auto-focusing algorithm, operator judgment, etc. As will be described in more detail subsequently, the automated focus evaluation detailed herein processes images into regions that are suitable for scoring by a computer-implemented machine-learning algorithm. Each image region can be compared against a first threshold to determine if the region is in focus, and then the number or proportion of in-focus image regions can be compared against a second threshold to determine if the image as a whole is sufficiently in focus to enable the image to be suitable for diagnostic purposes (e.g., histopathology). As will be explained in more detail subsequently, the processing of histological images into smaller image regions, the use of computer-implemented machine-learning models for focus scoring, and the use of dual thresholds to determine whole-image focus confer a number of advantages over existing methods of focus evaluation.
The systems and methods disclosed herein also enable the use of one or more sets of Z-stacked images (a type of focus-stacked image set described in more detail subsequently) to create training data. As will be explained in more detail subsequently, the use of Z-stacked images by the methods and systems disclosed herein allows for the creation of training data that includes real blur rather than the artificially-generated blur used by existing methods of generating training data for focus evaluation. Artificially-generated blur is typically generated using blurring algorithms (such as Gaussian blurring algorithms) that can be dissimilar from blur in real out-of-focus histological images, potentially biasing and reducing the accuracy of machine-learning models trained using artificially-blurred image data. Further and as will also be explained in more detail subsequently, the use of Z-stacked image data allows for normalization to create focus scores that vary between known values, improving the accuracy of predictions made by computer-implemented machine-learning models trained with labeled images generated according to the present disclosure.
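By way of non-limiting illustration, the normalization described above can be expressed as follows for a set of Z-stacked image regions covering a single (X, Y) location of a slide, where F_k denotes the measured sharpness of the region captured at the k-th Z-distance (the choice of sharpness metric is an implementation detail not prescribed here):

    s_k = F_k / max_j F_j,    with 0 <= s_k <= 1,

so that every region in the set receives a focus score between known bounds and the sharpest region in the set receives a score of one.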
FIGS. 1-2 are schematic diagrams of image analysis system 10, which is a system for training and using computer-implemented machine-learning models to evaluate the focus of histological images. Image analysis system 10 includes system controller 100, user interface 106, image scanner 160, and sample carousel 170. FIG. 1 illustrates elements of system controller 100, user interface 106, and sample carousel 170 in detail. FIG. 2 illustrates elements of image scanner 160 in detail. FIGS. 1-2 are discussed together herein.
System controller 100 includes processor 102 and memory 104, and further includes, is operatively connected to, and/or is otherwise in electronic communication with user interface 106. Memory 104 stores segmentation module 110, tissue detection module 120, focus scoring module 130, image collection module 140, and data labeling module 150. Sample carousel 170 includes glass slides 171. Image scanner 160 is configured to receive a glass slide (e.g., one of glass slides 171) and includes communication bus 172, motion controller 174, interface system 176, stage 178, sample 182, illumination system 184, objective lens 186, optical path 188, focusing optics 190, line scan camera 192, camera 194, field of view 196, objective lens positioner 198, stage positioner 199, and epi-illumination system 200. In the depicted example, image scanner 160 receives glass slide 171A. FIG. 2 also depicts arrows 197, which indicate the general direction in which light travels from sample 182 to line scan camera 192 and/or camera 194, as well as arrow X, arrow Y, and arrow Z, which indicate X-coordinate, Y-coordinate, and Z-coordinate directions, respectively.
System controller 100 is operatively connected to and/or is otherwise in electronic communication with image scanner 160 and sample carousel 170, such that system controller 100 is able to control operation of image scanner 160 and sample carousel 170. As will be explained in more detail subsequently, system controller 100 can cause sample carousel 170 to load slides (e.g., glass slide 171A depicted in FIG. 2) onto stage 178 and to unload slides from stage 178 to sample carousel 170. Further, and as will also be explained in more detail subsequently, system controller 100 can also cause image scanner 160 to image slides located on stage 178.
Processor 102 can execute software, applications, and/or programs stored on memory 104. Examples of processor 102 can include one or more of a processor, a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other equivalent discrete or integrated logic circuitry. Processor 102 can be entirely or partially mounted on one or more circuit boards.
Memory 104 is configured to store information and, in some examples, can be described as a computer-readable storage medium. In some examples, a computer-readable storage medium can include a non-transitory medium. The term “non-transitory” can indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium can store data that can, over time, change (e.g., in RAM or cache). In some examples, memory 104 is a temporary memory. As used herein, a temporary memory refers to a memory having a primary purpose that is not long-term storage. Memory 104, in some examples, is described as volatile memory. As used herein, a volatile memory refers to a memory that does not maintain stored contents when power to memory 104 is turned off. Examples of volatile memories can include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories. In some examples, memory 104 is used to store program instructions for execution by processor 102. Memory 104, in one example, is used by software or applications running on system controller 100 (e.g., by a computer-implemented machine-learning model) to temporarily store information during program execution.
Memory 104, in some examples, also includes one or more computer- readable storage media. The storage media can be configured to store larger amounts of information than volatile memory and, further, can be configured for long-term storage of information. In some examples, memory 104 includes non-volatile storage elements. Examples of such non-volatile storage elements can include, for example, magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
User interface 106 is an input and/or output device and/or software interface, and enables an operator to control operation of and/or interact with software elements of image analysis system 10, and the components thereof. For example, user interface 106 can be configured to receive inputs from an operator and/or provide outputs. User interface 106 can include one or more of a sound card, a video graphics card, a speaker, a display device (such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, etc.), a touchscreen, a keyboard, a mouse, a joystick, or other type of device for facilitating input and/or output of information in a form understandable to users and/or machines.
In the depicted example, user interface 106 is mounted to a front of the body of image analysis system 10 adjacent to sample carousel 170. In other examples, user interface 106 can be positioned elsewhere on image analysis system 10, including on separate hardware such as a remote device. Further, in the depicted example, user interface 106 includes a graphical display including one or more icons that a user can select or otherwise interact with (e.g., via a pointer, via a touchscreen interface, etc.) to control the operation of image analysis system 10.
In some examples, system controller 100 can operate an application programming interface (API) (e.g., as a software component of user interface 106 or as another software component of system controller 100) for facilitating communication between system controller 100 and other components of image analysis system 10 and/or for facilitating communication between image analysis system 10 and other devices connected to image analysis system 10.
Image scanner 160 is a digital imaging device for imaging histological samples. In some examples, image scanner 160 can be referred to as a scanner system, a scanning system, a scanning apparatus, a digital scanning apparatus, or a digital slide scanning apparatus. Image scanner 160 can be used to create image scans of histological slides for use by the programs of system controller 100.
The electronic devices and/or components of image scanner 160 are communicatively connected via communication bus 172. In particular, communication bus 172 connects to system controller 100, motion controller 174, interface system 176, illumination system 184, line scan camera 192, camera 194, objective lens positioner 198, stage positioner 199, and epi-illumination system 200. Communication bus 172 can include one or more physical ports, physical connectors, etc. and/or one or more digital interfaces, digital connectors, etc. for facilitating communication between devices, components, etc. connected to communication bus 172. Communication bus 172 can be configured to convey analog electrical signals and/or to convey digital data. Accordingly, electronic communications mediated by communication bus 172 may include both electrical signals and digital data. In some examples, communication bus 172 can also include one or more components for wireless communication, such that one or more of system controller 100, motion controller 174, interface system 176, illumination system 184, line scan camera 192, camera 194, objective lens positioner 198, stage positioner 199, and epi-illumination system 200 are connected to communication bus 172 via a wireless connection. Image scanner 160 is depicted as including a single communication bus, but in other examples, image scanner 160 can include any suitable number of communication busses for communicatively coupling the electronic devices and/or components of image scanner 160.
Motion controller 174 is configured to precisely control and coordinate X, Y, and/or Z movement of stage 178 (via stage positioner 199) and/or objective lens 186 (via objective lens positioner 198). Motion controller 174 can also be configured to control movement of any other moving part of image analysis system 10. For example, where image scanner 160 is a fluorescence scanner, motion controller 174 can be configured to coordinate movement of optical filters and related structures of epi-illumination system 200.
Interface system 176 includes one or more software and/or hardware elements for facilitating communication and data transfer between the image analysis system 10 and one or more external devices that are connected to image analysis system 10. The external devices can include, for example, one or more external devices that are directly connected to image analysis system 10 or any components thereof (e.g., a printer, removable storage medium, etc.) and/or one or more external devices that are connected via a network to image analysis system 10 or any components thereof (e.g., an image server system, an operator station, a user station, an administrative server system, etc.).
While system controller 100, user interface 106, motion controller 174, and interface system 176 are shown as separate components of image analysis system 10 in FIGS. 1-2, in other examples, two or more of system controller 100, user interface 106, motion controller 174, and interface system 176 can be integrated into a single component and/or device. Further, while system controller 100, user interface 106, motion controller 174, and interface system 176 are each shown as individual devices in FIGS. 1-2, in other examples, any of system controller 100, user interface 106, motion controller 174, and interface system 176 and/or any combination thereof can be distributed and/or virtualized across any suitable number of devices. Similarly, while image analysis system 10 is generally depicted as a single device herein, in other examples, the components of image analysis system 10 can be distributed and/or virtualized across any suitable number of systems and/or devices.
Sample carousel 170 is a carousel that stores glass slides 171. Sample carousel 170 includes robotic components for loading slides onto and unloading slides from stage 178, and enables image analysis system 10 to capture images of multiple samples without intervention or input from a human operator. Operation of sample carousel 170 can be controlled by, for example, system controller 100 and/or motion controller 174.
Glass slides 171 depicted in FIG. 1 are substrates on which histological samples are mounted, placed, etc. FIG. 2 depicts glass slide 171A, which is one of glass slides 171, to which sample 182 is mounted. Samples can be mounted to glass slides 171 using any suitable technique (e.g., wet mounting, dry mounting, etc.). In at least some examples, the samples mounted to glass slides 171 include tissue samples for analysis via histopathology (e.g., formalin-fixed paraffin-embedded samples and/or microtome sections thereof). In at least some examples, one or more of glass slides 171 can include a cover slip. While glass slides 171 are generally discussed herein as being glass, in other examples, glass slides 171 can be made from any other suitable material for imaging by image scanner 160. Glass slide 171A extends substantially in the X-coordinate and Y-coordinate directions, and has a thickness in the Z-coordinate direction.
Sample 182 is a specimen for interrogation by optical microscopy using image scanner 160. Sample 182 can be or include, for example, tissue, cells, chromosomes, deoxyribonucleic acids (DNA), protein, blood, bone marrow, urine, bacteria, beads, biopsy materials, or any other type of biological material or substance that is either dead or alive, stained or unstained, labeled or unlabeled, etc. Other examples of sample 182 include integrated circuit boards, electrophoresis records, petri dishes, film, semiconductor materials, forensic materials, machined parts, or any combination thereof. In some examples, sample 182 can include or be deposited on a substrate other than a glass slide (e.g., glass slide 171A). In some examples, sample 182 is stained with haematoxylin and eosin stain (“H&E stain”).
Illumination system 184 is configured to illuminate at least a portion of sample 182. Illumination system 184 may include, for example, a light source and illumination optics. As a specific example, the light source can be a variable intensity halogen light source with a concave reflective mirror to maximize light output and a KG-1 filter to suppress heat. The light source can also be, for example, any type of arc-lamp, laser, or other source of light. In the depicted embodiment, illumination system 184 illuminates sample 182 in transmission mode such that line scan camera 192 and/or camera 194 sense optical energy that is transmitted through sample 182. Alternatively, or in combination, illumination system 184 may also be configured to illuminate sample 182 in reflection mode such that the line scan camera 192 and/or camera 194 sense optical energy that is reflected from sample 182. More generally, illumination system 184 can be configured for interrogation of sample 182 and/or any other sample using any known mode of optical microscopy.
Epi-illumination system 200 is an optional component of image scanner 160 and is included in examples where it is advantageous to image sample 182 and/or any other suitable sample using epi-illumination, such as in examples where image scanner 160 is used for fluorescence scanning. Fluorescence scanning is the scanning of samples that include fluorescent molecules (i.e., fluorophores), which are photon-sensitive molecules that can absorb light at a specific wavelength (excitation). These photon-sensitive molecules also emit light at a longer wavelength (emission). Because the efficiency of this photoluminescence phenomenon is very low, the amount of emitted light is often very low. This low amount of emitted light can frustrate conventional techniques for scanning and digitizing fluorophore-containing samples (e.g., transmission mode microscopy). In examples of image scanner 160 where image scanner 160 includes epi-illumination system 200 and is used to image fluorophore-containing samples, line scan camera 192 can advantageously include a time delay integration (TDI) line scan camera to increase sensitivity of line scan camera 192 and improve capture of faint fluorophores having low emitted light. An example of a TDI line scan camera suitable for use in line scan camera 192 is described subsequently in the discussion of FIG. 12C. Advantageously, TDI sensors provide a substantially better signal-to-noise ratio (“SNR”) in the output signal than other types of line scan camera sensors by summing intensity data from previously imaged regions of a specimen, yielding an increase in the SNR that is in proportion to the square root of the number of integration stages.
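By way of non-limiting illustration, the SNR improvement described above can be written as

    SNR_TDI ≈ SNR_single · √N,

where N is the number of TDI integration stages and SNR_single is the signal-to-noise ratio of a comparable single-stage line sensor; a 64-stage TDI sensor, for example, would provide roughly an eight-fold improvement.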
In examples where image scanner 160 is a fluorescence scanner system, line scan camera 192 can be a monochrome TDI line scan camera. Monochrome images are advantageous in fluorescence microscopy because they provide a more accurate representation of the actual signals from the various channels present on the sample. As will be understood by those skilled in the art, fluorophore-containing samples can be labeled with multiple fluorophores that emit light at different wavelengths. Each emission wavelength can be referred to as a “channel.” In at least some examples, line scan camera 192 is or includes a monochrome 10-bit 64-linear-array TDI line scan camera.
Stage 178 is configured to receive and support a glass slide, such as one of glass slides 171. In the depicted example, stage 178 receives and supports glass slide 171A. Stage 178 is movable by stage positioner 199 and can be moved in the X-coordinate and Y-coordinate directions indicated by arrows X and Y, respectively (i.e., along the X-axis and Y-axis, respectively, of the coordinate system defined by arrows X, Y, and Z). In some examples, stage positioner 199 can also move stage 178 in the Z-coordinate direction to adjust or change the distance between stage 178 and objective lens 186, thereby adjusting or changing the distance between objective lens 186 and sample 182. Adjusting the Z-coordinate distance between objective lens 186 and sample 182 adjusts the position of the focal plane of objective lens 186 relative to sample 182 and can, accordingly, be used to adjust the focus of images of sample 182 captured by image scanner 160 (i.e., via line scan camera 192 and/or camera 194). Stage positioner 199 includes one or more motors for controlling movement of stage 178 and, as described previously, operation of stage positioner 199 can be controlled by system controller 100 (e.g., processor 102) and/or motion controller 174.
Stage positioner 199 can also be configured to accelerate stage 178 (and, as such, sample 182 mounted to glass slide 171A) in a scanning direction to a substantially constant velocity, and then maintain the substantially constant velocity during image data capture by the line scan camera 192. The “scanning direction” in which stage 178 is moved is defined by the orientation of sensor components of line scan camera 192 and is perpendicular or substantially perpendicular to the direction in which the sensor elements of line scan camera 192 extend. In at least some examples, stage positioner 199 can move stage 178 in the X-coordinate and Y-coordinate directions according to positions defined by an X-Y coordinate grid. Further, in at least some examples, stage positioner 199 is or includes a linear actuator with an encoder or encoders capable of reporting the position of stage 178 in both X-coordinate and Y-coordinate directions to at least nanometer precision. For example, encoders sensitive to translation along the X- and Y-axes can monitor position along scanning directions, while encoders sensitive to translation along the Z-axis can monitor adjustments to objective distance.
Objective lens 186 is an optical element of image scanner 160 that gathers and focuses light from sample 182. Specifically, objective lens 186 collects and focuses light from field of view 196. Objective lens 186 can be a single optical element, such as a single lens or mirror, or can include multiple optical elements, such as one or more lenses and/or mirrors. In some examples, objective lens 186 can be referred to as an “objective lens array” or an “objective array.” In some examples, the objective lens 186 is an achromatic, apochromatic (“APO”), or semi-APO infinity plan corrected objective with a numerical aperture corresponding to the highest spatial resolution desirable, where the objective lens 186 is suitable for transmission-mode illumination microscopy, reflection-mode illumination microscopy, and/or epi-illumination-mode fluorescence microscopy (e.g., an Olympus 40x, 0.75 NA or 20x, 0.75 NA). Advantageously, objective lens 186 is capable of correcting for chromatic and spherical aberrations. In examples where objective lens 186 is infinity corrected, focusing optics 190 can be placed in the optical path 188 above the objective lens 186 where the light beam passing through the objective lens 186 becomes a collimated light beam.
Focusing optics 190 focus the optical signal captured by the objective lens 186 onto the light-responsive elements of line scan camera 192 and/or camera 194, and can optionally include optical components such as filters, magnification changer lenses, etc. Objective lens 186, combined with focusing optics 190, provides the total magnification for image scanner 160. In at least one example, focusing optics 190 may contain a tube lens and an optional 2x magnification changer, thereby allowing a native 20x objective lens to scan samples at 40x magnification.
Objective lens 186 is mounted on or otherwise mechanically linked to objective lens positioner 198, such that objective lens positioner 198 can be used to control the position of objective lens 186 relative to stage 178. Objective lens positioner 198 includes one or more motors for controlling the position of objective lens 186 and, as described previously, operation of objective lens positioner 198 can be controlled by system controller 100 (e.g., processor 102) and/or motion controller 174. Objective lens positioner 198 is configured to adjust the Z-coordinate position of objective lens 186 and, in some examples, can also be configured to adjust the X-coordinate position and/or the Y-coordinate position of objective lens 186. In some examples, objective lens positioner 198 can include a very precise linear motor to move the objective lens 186 along the optical axis defined by objective lens 186 (i.e., in the Z-coordinate direction in FIG. 2).
Objective lens positioner 198 and stage positioner 199 are each optional elements of image scanner 160. However, in all examples, image scanner 160 includes at least one of objective lens positioner 198 or stage positioner 199 to enable translation of field of view 196 in the X-coordinate and Y-coordinate directions, and to adjust the Z-distance between stage 178 (and therefore any sample mounted to a slide held or placed on stage 178) and objective lens 186. The Z-location of either or both of objective lens 186 and stage 178 can be adjusted (i.e., via objective lens positioner 198 and stage positioner 199) to adjust the distance between stage 178 and objective lens 186. Adjusting the Z-distance between stage 178 and objective lens 186 adjusts the Z-position of the focal plane of objective lens 186 in sample 182 and, accordingly, can be used to focus the image(s) collected by image scanner 160.
Line scan camera 192 is an imaging sensor for imaging samples located on stage 178, such as sample 182 mounted to glass slide 171A placed on stage 178. Line scan camera 192 includes at least one linear array of sensor elements (“pixels”) and can capture image data in either monochrome or color. Monochrome line scan cameras can include one or more linear arrays and color line scan cameras can have three or more linear arrays. For example, line scan camera 192 can be a three linear array (“red-green-blue” or “RGB”) color line scan camera. More generally, line scan camera 192 can include any type of singular or plural linear array, whether packaged as part of a camera or custom-integrated into an imaging electronic module. For example, line scan camera 192 can also be a TDI camera including 24, 32, 48, 64, 96, or any other suitable number of arrays. The array(s) of line scan camera 192 can have any suitable number of pixels, including but not limited to 512 pixels, 1024 pixels, and 4096 pixels. In all examples, motion controller 174 and/or system controller 100 can synchronize the motion of stage 178 (i.e., via control of stage positioner 199) with the line rate of line scan camera 192 to enable image scanning of sample 182. Image data generated by line scan camera 192 can be stored to memory 104 or any other suitable computer-readable memory during capture and used to generate a contiguous digital image of at least a portion of sample 182.
Camera 194 is also an imaging sensor for imaging samples located on stage 178, such as sample 182 mounted to glass slide 171A. Camera 194 can be an “area scan” camera including a matrix of sensor elements (e.g., “pixels”) or a line scan camera substantially similar to line scan camera 192. Like line scan camera 192, camera 194 can be a monochrome or color camera such that camera 194 can capture monochromatic or color images of samples on stage 178. Where camera 194 is an area scan camera, the matrix of camera 194 can have any suitable number of pixels in each dimension of the matrix.
Line scan camera 192 and camera 194 are each optional elements of image scanner 160. However, in all examples, image scanner 160 includes at least one of line scan camera 192 or camera 194. In some examples, at least one of line scan camera 192 or camera 194 functions as a focusing sensor that operates in combination with the other of line scan camera 192 and camera 194. For example, camera 194 can be an additional line scan camera that functions as a focusing sensor for line scan camera 192. Imaging data from camera 194 acting as a focusing sensor, or from a separate focusing sensor, can be stored to memory 104 or another suitable computer-readable memory and used by processor 102 to adjust the distance between objective lens 186 and stage 178 (i.e., via one or both of objective lens positioner 198 and stage positioner 199) such that image scanner 160 can collect an in-focus image of the sample. The focusing sensor (e.g., camera 194, line scan camera 192, etc.) can be positioned on the same optical axis as the imaging sensor and/or can be positioned before or after the imaging sensor with respect to the scanning direction of the image scanner 160.
The process of collecting and recording image data to generate a digital image of at least a portion of a sample using line scan camera 192 and/or camera 194 can be referred to as “capturing” an image or a digital image of the sample or sample portion. Images captured by line scan camera 192, camera 194, or any combination thereof can be stored to memory 104 for further processing and use by the program(s) of system controller 100, as described in more detail subsequently. Image scanner 160 can be configured to capture an image of an entire slide and/or the entire sample mounted to the slide (a “whole-slide image”) and/or to capture less than the entire sample mounted to a slide.
In operation, glass slide 171A is loaded onto stage 178 by a human operator, by sample carousel 170, and/or any other suitable robotics element. Once glass slide 171A is placed on stage 178, sample 182 is illuminated by illumination system 184, epi-illumination system 200, or a combination thereof. Light reflecting off of or transmitting through sample 182 illuminates a portion thereof within field of view 196 for image capture. Light from the portion of sample 182 within field of view 196 travels along optical path 188 according to arrows 197 to either or both of line scan camera 192 and camera 194. The light excites sensor elements of line scan camera 192 and/or camera 194 and the resultant digital image data is stored to memory 104 of system controller 100 or another suitable computer-readable memory. System controller 100, motion controller 174, and/or a combination thereof can be used to perform auto-focusing using one or more auto-focusing algorithms stored to memory 104 or another suitable computer-readable memory. The auto-focusing algorithm(s) can be part of image collection module 140 or any other suitable software module. As described previously, auto-focusing can be performed using the imaging camera (i.e., of line scan camera 192 and camera 194) or using a separate camera specific for focusing. The relative Z-position of objective lens 186 and stage 178 is then adjusted (i.e., via one or both of objective lens positioner 198 and stage positioner 199) to move the focal plane of objective lens 186 to a position predicted to generate an in-focus image of sample 182 by the auto-focusing algorithm(s). An image scan of sample 182 can then be captured by image scanner 160. Auto-focusing and image capture are described as separate processes herein for explanatory clarity and convenience, and it is to be understood that auto-focusing and image capture can occur simultaneously and/or substantially simultaneously, and that in some examples, auto-focusing can be continuously or repeatedly performed as field of view 196 is moved across sample 182 (i.e., via X-coordinate and/or Y-coordinate translation caused by objective lens positioner 198 and/or stage positioner 199).
In examples where line scan camera 192 is used to image sample 182, the various components of image analysis system 10, and in particular of image scanner 160, enable automatic scanning and digitizing of sample 182. Glass slide 171A is securely placed on stage 178 of image scanner 160 to scan sample 182. In an example where stage 178 is moved and objective lens 186 is held in a static position, under control of system controller 100 and/or motion controller 174, stage 178 accelerates sample 182 to a substantially constant velocity for sensing by line scan camera 192, such that the speed of stage 178 is synchronized with the line rate of line scan camera 192. In examples where the width of field of view 196 and/or of line scan camera 192 is less than the width of the X-coordinate or Y-coordinate dimension of sample 182 and/or glass slide 171A, image scanner 160 can image sample 182 by collecting multiple “stripes” of image data. After scanning a stripe of image data, stage 178 decelerates and brings sample 182 to a substantially complete stop. Stage 178 then moves in a direction orthogonal to the scanning direction to position sample 182 for scanning of a subsequent stripe of image data (e.g., an adjacent stripe). Additional stripes are subsequently scanned until an entire portion of sample 182 or all of sample 182 is scanned. In other examples, objective lens 186 can be moved (i.e., by objective lens positioner 198) under control of system controller 100 and/or motion controller 174 in the same manner, such that the speed of objective lens 186 is synchronized with the line rate of line scan camera 192. Similarly, objective lens 186 can also be moved to collect multiple “stripes” of image data.
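The synchronization of stage speed with the line rate of line scan camera 192 described above can be illustrated by the following non-limiting sketch, which assumes that the sample-plane line pitch equals the sensor pixel size divided by the total optical magnification; the function name and parameter values are illustrative only and do not form part of the disclosure.

    def stage_velocity_mm_per_s(line_rate_hz: float,
                                sensor_pixel_um: float,
                                total_magnification: float) -> float:
        """Stage speed at which exactly one sample-plane line passes the
        sensor per line period (illustrative assumption)."""
        line_pitch_um = sensor_pixel_um / total_magnification
        return line_rate_hz * line_pitch_um / 1000.0  # convert um/s to mm/s

    # Illustrative values: a 20 kHz line rate, 10 um sensor pixels, and 20x
    # total magnification give 20000 * 0.5 um/s = 10 mm/s.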
During the foregoing mode of digital scanning of sample 182, a contiguous digital image of sample 182 is acquired as contiguous fields of view that are combined together to form an image stripe. All adjacent image stripes are similarly combined together to form a contiguous digital image of a portion or the entirety of sample 182. The scanning of sample 182 may include acquiring stripes extending in the X-coordinate direction and/or the Y-coordinate direction. The scanning of sample 182 may be either top-to-bottom, bottom-to-top, or both (bi-directional), and may start at any point on the sample. Alternatively, the scanning of sample 182 may be either left-to-right, right-to-left, or both (bi-directional), and may start at any point on the sample. Additionally, it is not necessary that image stripes be acquired in an adjacent or contiguous manner. Furthermore, the resulting image of sample 182 may be an image of the entire sample 182 or only a portion of sample 182.
Segmentation module 110, tissue detection module 120, focus scoring module 130, image collection module 140, and data labeling module 150 collectively enable image analysis system 10 to operate image scanner 160 to collect histological images and, further, to analyze those images. In particular and as will be explained in more detail subsequently, segmentation module 110, tissue detection module 120, and focus scoring module 130 enable the automated evaluation of the focus of histological images collected by image scanner 160 using one or more trained computer-implemented machine-learning models. Accordingly, segmentation module 110, tissue detection module 120, and focus scoring module 130 enable the evaluation of the performance of auto-focusing performed during image collection. As will also be explained in more detail subsequently, image collection module 140 and data labeling module 150 enable the automated generation of Z-stacked images containing real blur and the subsequent automated labeling of those images to train a computer-implemented machine-learning algorithm to score image focus.
Segmentation module 110 is a software module of system controller 100 and includes one or more programs for segmenting images collected by image scanner 160 into regions defining portions of the image. The regions defined by segmentation module 110 can be referred to as “tiles” or “patches” in some examples. The regions defined by segmentation module 110 can be of a pre-defined size and, for a given histological image, collectively span the entire pixel area of the histological image. The program(s) of segmentation module 110 can, in at least some examples, divide a histological image (such as a whole-slide image) into a plurality of equally-sized regions that can be analyzed individually by the program(s) of tissue detection module 120 and focus scoring module 130.
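A minimal, non-limiting sketch of the gridding performed by segmentation module 110 is shown below, assuming the image is held as a NumPy array; tile dimensions, padding of edge tiles, and region overlap are implementation choices not prescribed by the present disclosure.

    import numpy as np

    def grid_regions(image: np.ndarray, tile_height: int, tile_width: int):
        """Yield (top, left, tile) for equally-sized regions spanning the image.

        Edge regions that would extend past the image boundary are clipped in
        this sketch; other policies (padding, overlap) are equally possible.
        """
        height, width = image.shape[:2]
        for top in range(0, height, tile_height):
            for left in range(0, width, tile_width):
                yield top, left, image[top:top + tile_height, left:left + tile_width]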
Tissue detection module 120 is another software module of system controller 100 and includes one or more programs for detecting whether image data includes tissue. The program(s) of tissue detection module 120 are generally described herein as being used to analyze image region data generated by segmentation module 110, but in other examples, the program(s) of tissue detection module 120 can be configured to analyze image region data of any suitable size (i.e., of any pixel area, covering any portion of a sample mounted to a slide, etc.).
Focus scoring module 130 is yet a further software module of system controller 100 and includes one or more programs for evaluating and/or scoring the focus of digital image data. The program(s) of focus scoring module 130 are generally described herein as being used to analyze image region data generated by segmentation module 110 and, in some examples, a subset of image region data including tissue as determined by the program(s) of tissue detection module 120. However, in other examples, the program(s) of focus scoring module 130 can be configured to analyze image region data of any size (i.e., of any pixel area, covering any portion of a sample mounted to a slide, etc.). Focus scoring module 130 uses a computer-implemented machine-learning model trained to assess focus of image data to generate focus scores. The trained computer-implemented machine-learning model can be, for example, a convolutional neural network having any suitable number of convolutional layers and, in some examples, can be trained using labeled data generated by data labeling module 150.
Focus scoring module 130 uses a dual-threshold scheme to assess image focus, as will be explained in more detail subsequently and particularly with respect to FIGS. 3-6. Briefly, focus scoring module 130 can score each image region generated by segmentation module 110 and/or each image region found to contain tissue according to tissue detection module 120. Focus scoring module 130 can then determine the number of image regions that are in-focus according to a focus score threshold. The number of in-focus regions is then compared to a second threshold for in-focus area to determine if the image as a whole is in-focus. Each threshold can be tuned or selected such that the ultimate determination of whether the image is in focus can reflect whether the image is sufficiently in focus to be useful for diagnostic purposes or other downstream image analysis tasks.
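The dual-threshold scheme described above can be sketched as follows, by way of non-limiting illustration; the sketch assumes that higher focus scores indicate better focus and uses the fraction-of-regions form of the second threshold, although a count-based or pixel-area-based second threshold could be substituted.

    from typing import Sequence

    def scan_passes(focus_scores: Sequence[float],
                    focus_score_threshold: float,
                    min_in_focus_fraction: float) -> bool:
        """Apply the two thresholds to region-level focus scores.

        A region counts as in focus if its score satisfies the first threshold;
        the image as a whole passes if the fraction of in-focus regions
        satisfies the second threshold.
        """
        if not focus_scores:
            return False
        in_focus = sum(1 for score in focus_scores if score >= focus_score_threshold)
        return in_focus / len(focus_scores) >= min_in_focus_fraction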
Image collection module 140 is another software module of system controller 100 and includes one or more programs for collecting histological images using image scanner 160. The program(s) of image collection module 140 can be configured to control any number of components of image scanner 160, including but not limited to motion controller 174, line scan camera 192, camera 194, and/or any combination thereof. Image collection module 140 can include one or more programs for controlling autofocusing or other initial focusing of image scanner 160 prior to image capture. Image collection module 140 can also be used to collect Z-stacked images. As referred to herein, “Z-stacked” images are images of a particular slide or histological sample taken at different Z-distances between objective lens 186 and a sample (e.g., sample 182), and accordingly between objective lens 186 and stage 178. As such, Z-stacked images have varying focus and can be used to more accurately simulate poorly-focused histological images as compared to existing techniques that apply artificial or synthetic blur, as will be explained in more detail subsequently and particularly with respect to FIGS. 7A-7B, 8, and 9.
Data labeling module 150 is a further software module of system controller 100 and includes one or more programs for generating labeled data suitable for training a computer-implemented machine-learning model from Z-stacked histological slide images. Data labeling module 150 can use the program(s) of segmentation module 110 and/or tissue detection module 120 to segment Z-stacked images and, in relevant examples, exclude portions of the Z-stacked images that do not include tissue. Data labeling module 150 can then sequence images corresponding to the same X-coordinate and Y-coordinate regions of the imaged slide into a set of Z-stacked image regions. Each image in each set of Z-stacked image regions can be analyzed for overall sharpness, and the sharpness of each image region in each set of Z-stacked image regions can be normalized to the image region in that set having the highest sharpness. The image regions can be labeled with the normalized focus values and used for training the computer-implemented machine-learning model. The computer-implemented machine-learning model can be, for example, a convolutional neural network having any suitable number of convolutional layers.
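A minimal, non-limiting sketch of the labeling described above is shown below; it assumes a variance-of-Laplacian sharpness metric (one common choice, not necessarily the metric used by data labeling module 150) and grayscale region images held as NumPy arrays.

    import numpy as np
    from scipy.ndimage import laplace  # used only for the illustrative sharpness proxy

    def sharpness(region: np.ndarray) -> float:
        """Variance-of-Laplacian sharpness proxy (illustrative choice)."""
        return float(laplace(region.astype(float)).var())

    def label_z_stack(region_sequence):
        """Label each Z-plane of one (X, Y) region with a normalized focus score.

        region_sequence holds images of the same region captured at different
        Z-distances; labels are normalized to the sharpest plane, so they vary
        between 0 and 1 with the best-focused plane labeled 1.0.
        """
        raw = np.array([sharpness(region) for region in region_sequence])
        labels = raw / raw.max() if raw.max() > 0 else raw
        return list(zip(region_sequence, labels))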
AUTOMATED FOCUS DETECTION
FIG. 3 is a schematic diagram of workflow 201, which is a graphical representation of a workflow for focus analysis by system controller 100. Workflow 201 includes raw image 202, processed image 205, pass 212A, and fail 212B. Raw image 202 includes tissue 204 and processed image 205 includes tissue 204, tissue-containing regions 206, in-focus regions 210, and out-of-focus regions 211. Workflow 201 is a graphical representation of an example histological image that is processed by segmentation module 110, tissue detection module 120, and focus scoring module 130 according to method 600, discussed subsequently and particularly with respect to FIG. 6.
Raw image 202 is an image of a histological slide that includes tissue 204. Tissue 204 has been H&E stained for diagnostic (e.g., histopathologic) analysis. The images shown in FIG. 3 (i.e., raw image 202 and processed image 205) are two-dimensional images including pixel data extending in the X-coordinate and Y-coordinate directions shown in FIG. 2. Raw image 202 is collected using image scanner 160, and is captured at a Z-distance between tissue 204/stage 178 and objective lens 186 that is predicted to result in an in-focus image of tissue 204 according to an auto-focusing algorithm.
Raw image 202 is processed into processed image 205 using segmentation module 110, tissue detection module 120, and focus scoring module 130. In particular, raw image 202 is processed using segmentation module 110 to segment raw image 202 into image regions to cover the entirety of raw image 202, and is subsequently processed using tissue detection module 120 to identify regions that do not contain tissue, allowing regions that do not contain tissue to be excluded from further processing. The processing performed by segmentation module 110 and tissue detection module 120 creates tissue-containing regions 206, which are areas having pre-defined pixel dimensions according to the program(s) of segmentation module 110 and which include tissue according to tissue detection module 120. Tissue-containing regions 206 are represented as boxes in processed image 205.
To generate tissue-containing regions 206, segmentation module 110 first defines a number of regions according to the pixel dimensions desired for the regions as well as the pixel dimensions of raw image 202. Segmentation module 110 can, for example, define regions covering the entirety of raw image 202 by gridding raw image 202 into a plurality of gridded regions. In such examples, grid dimensions and placement can be selected to generate regions useful for image analysis by tissue detection module 120 and/or focus scoring module 130. The gridded regions generated in this manner can be rectangular or substantially rectangular and, further, can be of uniform or substantially uniform size. Advantageously, segmentation by segmentation module 110 can reduce the size of input data provided to and improve the accuracy of focus scores generated by the computer-implemented machine-learning model used by focus scoring module 130.
Tissue detection module 120 then analyzes image data within each region and excludes regions containing an insufficient amount of tissue from further processing (hence the exclusion of regions defined by segmentation module 110 in processed image 205). Notably, as tissue detection module 120 analyzes image regions for an amount of tissue present (e.g., by pixel coverage or area detected to belong to tissue 204), tissue-containing regions 206 can include pixel area that does not include tissue 204 and, further, portions of tissue 204 may not be covered by a region 206.
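By way of non-limiting illustration only, the exclusion performed by tissue detection module 120 can be expressed as a fraction-of-pixels test; the 10% cutoff below is a hypothetical value, not one taken from the present disclosure.

    import numpy as np

    def region_contains_tissue(region_tissue_mask: np.ndarray,
                               min_tissue_fraction: float = 0.10) -> bool:
        """Keep a region for focus scoring only if enough of its pixel area
        was detected as tissue (cutoff is illustrative only)."""
        return float(region_tissue_mask.mean()) >= min_tissue_fraction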
Focus scoring module 130 then generates a focus score for each of tissue-containing regions 206 using a trained computer-implemented machine-learning model. In-focus regions 210 have focus scores found to be sufficiently in-focus according to a focus score threshold and are represented as empty boxes. Out-of-focus regions 211 have focus scores that do not satisfy the focus score threshold and are represented as boxes filled with a dot pattern. The empty boxes and pattern-filled boxes used to represent in-focus regions 210 and out-of-focus regions 211 in FIG. 3 are merely illustrative and are depicted for explanatory purposes. The program(s) of focus scoring module 130 can then determine whether the area defined by in-focus regions 210 satisfies a separate in-focus area threshold. The in-focus area threshold can be, for example, a total pixel area or, alternatively, can be a percentage of pixel area including tissue. The percentage of pixel area can be defined as, for example, a percentage of tissue-containing regions 206 that are in-focus regions 210. The area threshold can also be, for example, a number of tissue-containing regions 206 that are in-focus regions 210. Workflow 201 moves to pass 212A or fail 212B according to whether the second, area-based threshold is satisfied. More specifically, if the area threshold is satisfied (e.g., if there is the requisite percentage of tissue-containing regions 206 that are in-focus regions 210), workflow 201 moves to pass 212A for raw image 202. If the area threshold is not satisfied (e.g., if the raw image 202 is not found to have the requisite percentage of in-focus regions 210 of tissue-containing regions 206), workflow 201 moves to fail 212B. The thresholds used in workflow 201 (i.e., the in-focus threshold and the area threshold) can be selected such that images that “pass” are suitable for diagnostic purposes, and images that “fail” are not suitable for diagnostic purposes. Whether an image passes or fails the analysis outlined in workflow 201, or ultimately whether the image is found to be suitable for diagnostic purposes by system controller 100 can be stored to memory 104 and associated with the analyzed image or an identifier (e.g., file name, etc.) linked to or otherwise associated with the analyzed image.
FIG. 4 and FIG. 5 are schematic diagrams of workflow 240 and workflow 270, respectively, which are similar to workflow 201 but mask focus data to histological images for interpretation by a human operator. FIG. 4 and FIG. 5 are discussed together herein. FIG. 4 depicts workflow 240, which processes raw image 250 into processed image 260. Raw image 250 includes tissue 252 and processed image 260 includes tissue 252, in-focus area 262A, and out-of-focus area 262B. FIG. 5 depicts workflow 270, which processes raw image 280 into processed image 290. Raw image 280 includes tissue 282 and processed image 290 includes tissue 282, in-focus areas 292A, and out-of-focus area 292B.
Raw image 250 and raw image 280 are images of two different histological slides that include tissue 252 and tissue 282, respectively. Tissue 252 and tissue 282 are tissue samples that have been H&E stained for diagnostic (e.g., histopathologic) analysis. The images shown in FIGS. 4-5 (i.e., raw image 250, processed image 260, raw image 280, and processed image 290) are two-dimensional images including pixel data extending in the X-coordinate and Y-coordinate directions shown in FIG. 2. Raw image 250 and raw image 280 are collected using image scanner 160, and are captured at Z-distances (e.g., between stage 178 and objective lens 186) that are predicted by an auto-focusing algorithm to result in in-focus images of tissue 252 and tissue 282.
Processed image 260 and processed image 290 are focus maps of tissue 252 and tissue 282, respectively, and can be used by an operator (i.e., a human operator) to identify in-focus and out-of-focus regions of tissue 252 and tissue 282, respectively. The focus maps are generated partially according to the steps used to generate processed image 205 in workflow 201 (FIG. 3). More specifically, each of raw images 250, 280 is processed using the steps explained previously for generating processed image 205 from raw image 202 in workflow 201 (FIG. 3). The in-focus regions and out-of-focus regions are then masked to the image data corresponding to tissue 252 and tissue 282 to create processed image 260 and processed image 290, respectively. Image data corresponding to tissue can be identified using the program(s) of tissue detection module 120 and/or any other suitable program for identifying tissue. Accordingly, in processed image 260, in-focus area 262A and out-of-focus area 262B are area masks that generally conform to respective portions of the pixel area(s) occupied by tissue 252 and that are generated by masking in-focus regions and out-of-focus regions generated according to workflow 201 (FIG. 3) to the pixel data for tissue 252 in raw image 250. Similarly, in processed image 290, in-focus area 292A and out-of-focus area 292B are area masks that generally conform to the pixel area(s) occupied by tissue 282 and that are generated by masking in-focus regions and out-of-focus regions generated according to workflow 201 (FIG. 3) to the pixel data for tissue 282 in raw image 280. One or more programs of system controller 100, such as one or more programs of focus scoring module 130, can be used to mask focus data to pixel data corresponding to tissue in histological images generated by image scanner 160.
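A minimal, non-limiting sketch of the masking described above is shown below, assuming rectangular region boxes and a boolean tissue mask aligned with the raw image; the function and parameter names are illustrative only.

    import numpy as np

    def focus_masks(image_shape, region_boxes, region_in_focus, tissue_mask):
        """Mask region-level focus decisions to tissue pixels.

        region_boxes: iterable of (top, left, height, width) boxes.
        region_in_focus: parallel sequence of booleans for those boxes.
        tissue_mask: boolean array marking pixels detected as tissue.
        Returns (in_focus_tissue, out_of_focus_tissue) boolean masks suitable
        for display as toggleable overlay layers.
        """
        in_focus = np.zeros(image_shape[:2], dtype=bool)
        for (top, left, height, width), is_in_focus in zip(region_boxes, region_in_focus):
            if is_in_focus:
                in_focus[top:top + height, left:left + width] = True
        return in_focus & tissue_mask, (~in_focus) & tissue_mask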
Processed image 260 and processed image 290 and, in particular, in-focus areas 262A, 292A and out-of-focus areas 262B, 292B can be presented to users analyzing raw image 250 and raw image 280, respectively, to aid in analysis (e.g., diagnostic analysis, histopathologic analysis, etc.) of raw image 250 and raw image 280, respectively. In at least some examples, in-focus area 262A and out-of-focus area 262B can be presented as a toggleable layer when a user views image data for raw image 250 to enable a user to selectively toggle between viewing raw image 250 and processed image 260. Similarly, in at least some examples, in-focus area 292A and out-of-focus area 292B can be presented as a toggleable layer when a user views image data for raw image 280 to enable a user to selectively toggle between viewing raw image 280 and processed image 290. Processed image 260 and processed image 290, and more specifically in-focus areas 262A, 292A and out-of-focus areas 262B, 292B, can be viewable by a user using user interface 106 and/or any other suitable user interface device connected to (or belonging to a device connected to) image analysis system 10 (e.g., via interface system 176). The user interface can present one or more graphical buttons that a user can use to selectively view in-focus areas and out-of-focus areas (e.g., in-focus areas 262A, 292A and out-of-focus areas 262B, 292B) generated by the program(s) of system controller 100. Advantageously, the masked images shown in FIGS. 4 and 5 can simplify graphical presentation of out-of-focus and in-focus determinations generated by the program(s) of system controller 100.
In-focus areas 262A, 292A are depicted as transparent or empty regions and out-of-focus areas 262B, 292B are depicted as areas filled with a dot pattern in FIGS. 4 and 5 for illustrative purposes. In other examples, in-focus areas and out-of-focus areas generated according to workflows 240, 270 can be presented to an operator using any suitable graphical style. For example, in-focus areas can be shaded one color (e.g., green) and out-of-focus areas can be shaded a different color (e.g., red). Any suitable color, pattern, line weight, etc. can be used to delineate in-focus and out-of-focus regions in images processed according to workflows 240, 270.
FIG. 6 is a flow diagram of method 600, which is a method of scoring image focus performable by image analysis system 10. Method 600 includes steps 602-634 of generating a predicted in-focus Z-distance with an auto-focusing algorithm (step 602), positioning at least one of a lens or a stage according to the predicted in-focus Z-distance (step 604), capturing an image scan (step 606), receiving the image scan (step 608), segmenting the image scan into a plurality of regions (step 610), identifying region(s) containing tissue (step 612), generating a focus score for each region with a machine-learning model (step 614), determining whether the focus score for each region satisfies a threshold (step 616), designating regions as in-focus (step 618), designating regions as out-of-focus (step 620), determining whether the number of in-focus regions satisfies a threshold (step 622), generating an electronic indication that the image is acceptable for diagnostic use (step 624), outputting a user-understandable representation of the electronic indication (step 626), generating an electronic indication that the image is unacceptable for diagnostic use (step 628), outputting a user-understandable representation of the electronic indication (step 630), prompting a user to re-scan the slide (step 632), and automatically re-scanning the slide (step 634). Method 600 is discussed generally herein with respect to image analysis system 10, but method 600 can be performed using any suitable system to generate determinations of histological image focus and confer advantages related thereto.
Steps 602-634 are performed for an individual slide containing a tissue sample. The tissue sample can be stained using any suitable staining method and, in some examples, can be unstained. The tissue sample can be mounted to the slide using any suitable mounting method and the slide can be composed of any suitable substrate material.
In step 602, processor 102 of system controller 100 generates a predicted in-focus Z-distance with an auto-focusing algorithm. The in-focus Z-distance can describe a Z-position of one or both of objective lens 186 and stage 178 and/or a distance between objective lens 186 and stage 178 that is predicted, according to the auto-focusing algorithm, to position the focal plane of objective lens 186 to create an in-focus image of the tissue sample. Image scanner 160 may capture one or more images or portions of images, or may use other sensor data (such as data from one of line scan camera 192 and/or camera 194) to provide data to the auto-focusing algorithm to generate a predicted in-focus Z-distance. In at least some examples, system controller 100 can cause at least one of objective lens 186 or stage 178 to move in the Z-direction (shown by arrow Z in FIG. 2) using, e.g., objective lens positioner 198 and/or stage positioner 199, respectively, can collect sensor and/or image data at a plurality of Z-positions, and can analyze the resultant data to determine a predicted in-focus Z-position. The auto-focusing algorithm used by system controller 100 can be any suitable auto-focusing algorithm (including contrast-based autofocusing and phase-based autofocusing algorithms) and other methods of auto-focusing known to those of skill in the art are contemplated herein.
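The following Python sketch illustrates one possible contrast-based search of the kind described above; it is not the claimed auto-focusing algorithm, and the helpers move_z and capture_preview are hypothetical stand-ins for the positioner and camera interfaces of image scanner 160. The variance-of-the-Laplacian metric is only one of many suitable focus measures.

import numpy as np
from scipy import ndimage

def focus_metric(image: np.ndarray) -> float:
    # Variance of the Laplacian: higher values generally indicate a sharper image.
    return float(ndimage.laplace(image.astype(np.float64)).var())

def predict_in_focus_z(move_z, capture_preview, z_candidates):
    # Sweep candidate Z-distances, score a preview image captured at each
    # candidate, and return the candidate whose preview maximizes the metric.
    scores = []
    for z in z_candidates:
        move_z(z)  # position objective lens and/or stage (hypothetical hook)
        scores.append(focus_metric(capture_preview()))
    return z_candidates[int(np.argmax(scores))]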
In step 604, at least one of objective lens 186 or stage 178 is positioned to provide the Z-distance generated in step 602 between objective lens 186 and stage 178. Processor 102 and/or motion controller 174 can adjust the position of either or both of objective lens 186 and stage 178 by controlling operation of objective lens positioner 198 and/or stage positioner 199, respectively.  Step 602 and step 604 are optional steps of method 600 and are performed in examples where it is advantageous to use an algorithmic autofocusing system rather than manual focusing or another suitable type of focusing to acquire the image scan in subsequent step 606. In all examples, some variety of focusing is performed prior to step 606 to place or attempt to place the focus plane of objective lens 186 at a suitable position for collecting an in-focus image of the histological image sample. Advantageously, subsequent steps 606-634 enable automated evaluation of focusing performed prior to step 606 in order to determine whether the image collected in subsequent step 606 is sufficiently in focus to be useful for diagnostic (e.g., histopathological) purposes. In some examples, steps 602 and 604 can be performed by a system other than image analysis system 10.
In step 606, image scanner 160 captures an image scan of the histological slide. The image can include any suitable portion of the histological slide and the tissue contained therein. In at least some examples, the image captured in step 606 is a whole-slide image and includes an entirety of the sample mounted to the imaged slide and/or the entire area of the slide. Image scanner 160 can capture the image scan using line scan camera 192 and/or camera 194 using any suitable technique for capturing a histological image. For example, image scanner 160 can capture the image in step 606 with line scan camera 192 by moving the histological sample at a constant velocity as described previously in the discussion of FIG. 2. In these examples, the image can be captured in multiple stripes, as described previously with respect to FIG. 2. The image scan can be stored to memory 104 of system controller 100 and/or any other suitable computer-readable memory (e.g., a memory device connected to, housed on, etc. image scanner 160).
Step 606 can be performed by a device or system other than image analysis system 10 in some examples. Further, in some examples, steps 602-606 can be performed substantially prior to performance of steps 608-634, such that a gap in time exists between performance of step 606 and performance of step 608. For example, steps 602-606 can be performed for all histological slides to be imaged (e.g., all of slides 171) and, subsequent to image collection, steps 608-634 can be performed to determine whether the collected images are suitable for diagnostic purposes. The aforementioned example is included for illustrative purposes and other schemes in which image collection and focus analysis are performed substantially asynchronously are contemplated herein.
In step 608, processor 102 receives the image scan. Processor 102 can, for example, receive the image scan by retrieving or otherwise accessing data for the image scan stored to memory 104. In step 610, processor 102 segments the image scan received in step 608 into a plurality of image regions. Processor 102 can execute the program(s) of segmentation module 110 to create the image regions from the image received in step 608. As described in the discussion of FIGS. 1-2, the program(s) of segmentation module 110 can be configured to generate any suitable number of image regions from the image received in step 608. The image regions generated in step 610 each correspond to different X-coordinate and Y-coordinate ranges of the image received in step 608, and can be differently sized or can be identically-sized (or substantially identically-sized). For example, the image regions can be generated by gridding the image using a regular or irregular grid, as described previously with respect to FIGS. 1-2. In at least some examples, the image regions are generated by gridding the image scan received in step 608 into regions that are rectangular (or substantially rectangular) and identically-sized (or substantially identically-sized). In at least some examples, each image can be segmented into approximately 8,000 regions.
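A minimal Python sketch of regular gridding of the kind described above is shown below; the 256 x 256 pixel tile size is an assumption chosen for illustration and is not specified by this disclosure.

import numpy as np

def grid_regions(image: np.ndarray, tile_h: int = 256, tile_w: int = 256):
    # Split an image scan (an H x W or H x W x C array) into tiles on a
    # regular grid, returning (y0, x0, tile) entries; edge tiles may be smaller.
    regions = []
    height, width = image.shape[:2]
    for y0 in range(0, height, tile_h):
        for x0 in range(0, width, tile_w):
            regions.append((y0, x0, image[y0:y0 + tile_h, x0:x0 + tile_w]))
    return regions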
In step 612, processor 102 identifies regions containing tissue. Step 612 is an optional step and is performed in examples where it is advantageous to exclude regions generated in step 610 that do not contain tissue from subsequent steps of method 600. In some examples, however, the image scan received in step 608 may not include any regions that lack tissue (i.e., all pixel area of the image scan received in step 608 is image data of tissue). In these examples, it may be advantageous to omit step 612 from method 600. In examples including step 612, method 600 proceeds to step 612 from step 610 and then proceeds to step 614. In examples of method 600 omitting step 612, method 600 proceeds directly to step 614 from step 610.
Processor 102 executes the program(s) of tissue detection module 120 to determine whether each image region generated in step 610 contains a sufficient pixel area corresponding to tissue to be used in subsequent steps of method 600. The threshold amount of tissue can be selected based on operator preference and/or another useful parameter, such as the sensitivity to background or non-tissue image data and/or training of the computer-implemented machine-learning model used in subsequent step 614. In at least some examples, the threshold used to exclude image regions can be half of the pixel area of an image region, such that the program(s) of tissue detection module 120 are configured to exclude a region from further processing via method 600 if less than half of the pixel area of the image region corresponds to tissue.
Tissue detection module 120 can use any suitable tissue detection algorithm for analyzing image data to determine whether the image data contains tissue. Image regions not found by the program(s) of tissue detection module 120 to contain a sufficient amount (e.g., pixel area) of tissue can be excluded from subsequent steps of method 600 and images found by the program(s) of tissue detection module 120 to contain a sufficient amount (e.g., pixel area) of tissue can be used for subsequent steps of method 600.
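One simple way to express the half-area criterion described above is sketched below in Python; the near-white background heuristic and the background_level value are assumptions standing in for whatever tissue-detection algorithm tissue detection module 120 actually uses.

import numpy as np

def contains_enough_tissue(region: np.ndarray,
                           background_level: int = 220,
                           min_fraction: float = 0.5) -> bool:
    # Treat near-white pixels as background (a stand-in for a real tissue
    # detector) and keep the region only if at least min_fraction of its
    # pixel area appears to contain tissue.
    gray = region.mean(axis=-1) if region.ndim == 3 else region
    tissue_fraction = float((gray < background_level).mean())
    return tissue_fraction >= min_fraction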
In step 614, processor 102 generates a focus score for each region using a computer-implemented machine-learning model trained to generate focus scores based on image data inputs. In examples of method 600 omitting step 612, the regions analyzed in step 614 can be all regions generated during step 610. In examples of method 600 including step 612, the regions used in step 614 can be regions found to contain a sufficient amount (e.g., pixel area) of tissue by the program(s) of tissue detection module 120.
Processor 102 executes the program(s) of focus scoring module 130 to generate the focus scores in step 614. A computer-implemented machine-learning model is used to generate a score for each image region (i.e., the image region is used as an input to the computer-implemented machine-learning model). The computer-implemented machine-learning model can be any suitable computer-implemented machine-learning model and is trained to generate focus scores varying within a pre-defined range of values based on image data, such that the scores generated in step 614 can be used to determine the degree to which each image region is in focus or out of focus. The computer-implemented machine-learning model can be, for example, a neural network, such as a convolutional neural network having at least one convolutional layer.
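As one hypothetical illustration of such a model, the following PyTorch sketch defines a small convolutional regressor that maps an RGB image region to a single focus score in the 0-1 range used by the normalized sharpness labels described later; the framework, the layer sizes, and the sigmoid output are assumptions, not requirements of this disclosure.

import torch
import torch.nn as nn

class FocusScorer(nn.Module):
    # Small convolutional regressor mapping an RGB image region to a single
    # focus score; the sigmoid keeps outputs between 0 and 1.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 3, height, width); output has shape (batch, 1).
        return self.head(self.features(x))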
Step 616 is performed for each image region for which a focus score is generated in step 614. In step 616, processor 102 determines whether the focus score for each image region satisfies a focus score threshold. The focus score threshold can be a threshold value and whether a focus score satisfies the threshold can be determined by comparing the focus score to the threshold value. In examples where greater focus scores are associated with more in-focus images, a focus score can satisfy the threshold value by matching and/or exceeding the threshold value. In examples where lower focus scores are associated with more in-focus images, the focus score can satisfy the threshold value by being less than or less than or equal to the threshold value. Whether greater focus scores or lower focus scores are associated with more in-focus images can be determined by the training of the computer-implemented machine-learning model used in step 614. That is, the scores generated in step 614 are based at least in part on the scoring scheme used to label the training data used to train the computer-implemented machine-learning model and, accordingly, the scheme used to assess whether focus scores satisfy the threshold in step 616 can also be determined, at least in part, according to the scoring scheme used to label the training data. The specific value(s) that satisfy the threshold can be selected in combination with the threshold used in subsequent step 622 so that method 600 can accurately assess whether histological images are diagnostically-useful.
If an image region satisfies the threshold, method 600 proceeds to step 618 to designate the image region as in-focus. Method 600 then returns to step 616 to determine whether the remaining image regions satisfy the focus score threshold. If an image region does not satisfy the focus score threshold, method 600 proceeds to step 620 to designate the image region as out-of-focus. Method 600 then returns to step 616 to determine whether the remaining image regions satisfy the focus score threshold. Steps 616-620 are repeated for each image region until all image regions for which focus scores were generated in step 614 have been designated as in-focus or out-of-focus.
When all image regions are evaluated, method 600 proceeds to step 622. In step 622, processor 102 determines whether the in-focus regions designated as such in step 618 satisfy an area or coverage threshold. Step 622 can be performed by determining whether the number of regions satisfies a numerosity threshold. More specifically, step 622 can be performed by determining whether a number of in-focus regions is greater than a numerosity threshold. The numerosity threshold can be pre-determined or can be determined based on the number of regions for which focus scores were generated in step 614. For example, the numerosity threshold can be a minimum percent of scored regions. The threshold number of regions can accordingly be determined by multiplying the total number of regions for which focus scores were generated in step 614 by a specified minimum percent of scored regions, and the number of in-focus regions as designated in step 618 can then be compared against the threshold in step 622. In at least some examples, the threshold used in step 622 is 80% of all scored regions, such that at least 80% of focus scores generated in step 614 must satisfy the threshold in step 616.
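The two-threshold check can be expressed compactly, as in the Python sketch below; the per-region score threshold of 0.8 is an assumed example value, the 80% coverage requirement follows the example above, and higher scores are assumed to indicate sharper regions.

def image_is_acceptable(focus_scores, score_threshold=0.8, min_in_focus_fraction=0.8):
    # Apply the per-region focus threshold (step 616) and then the coverage
    # threshold (step 622): enough of the scored regions must be in-focus.
    if not focus_scores:
        return False
    in_focus = [score >= score_threshold for score in focus_scores]
    return sum(in_focus) / len(in_focus) >= min_in_focus_fraction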
In some examples, the area threshold used in step 622 is a threshold of contiguous regions. For example, the area threshold can be a number of contiguous out-of-focus regions. In this example, processor 102 can determine whether a total number of contiguous regions that are out-of-focus according to the determinations made in iterations of step 616 is greater than a threshold value for contiguous out-of-focus regions. If the number of contiguous regions that are out-of-focus (i.e., according to step 616) is sufficiently large to exceed the threshold value, the image does not satisfy the threshold in step 622. In some examples, images having a relatively large number of discontiguous out-of-focus regions may still be suitable for diagnostic use. Advantageously, using a threshold of contiguous out-of-focus regions can prevent images from improperly being designated as unsuitable for diagnostic use in these examples. Substantially the same analysis can be used where the area threshold is a number of contiguous in-focus regions.
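A contiguity check of this kind can be sketched with connected-component labeling of the region grid, as below; the grid layout of regions, the 4-connectivity, and the max_contiguous value are illustrative assumptions rather than requirements of this disclosure.

import numpy as np
from scipy import ndimage

def too_many_contiguous_out_of_focus(out_of_focus_grid: np.ndarray,
                                     max_contiguous: int = 50) -> bool:
    # out_of_focus_grid is a 2-D boolean array mirroring the region grid,
    # True where a region was designated out-of-focus in step 620. The image
    # fails if any 4-connected blob of out-of-focus regions is too large.
    labeled, num_blobs = ndimage.label(out_of_focus_grid)
    if num_blobs == 0:
        return False
    blob_sizes = ndimage.sum(out_of_focus_grid, labeled, range(1, num_blobs + 1))
    return bool(np.max(blob_sizes) > max_contiguous)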
In further examples, the area threshold used in step 622 can be a pixel area and the total pixel area of the regions designated as in-focus in step 618 and/or the regions designated as out-of-focus in step 620 can be compared against the threshold value in step 622.
If the area threshold is satisfied in step 622, method 600 proceeds to step 624. In step 624, processor 102 generates an indication that the image is acceptable for diagnostic use. The indication generated in step 624 can be representative of text, one or more icons, or another suitable manner of describing that the image received in step 608 is sufficiently in focus for diagnostic use. The indication can be, for example, text stating “Pass” or any other suitable phrase for indicating that the image is sufficiently in focus for downstream diagnostic use. The indication can be stored to memory 104 and associated with the image scan data received in step 608.
In step 626, user interface 106 outputs a user-understandable representation of the electronic indication generated in step 624. The user-understandable representation can be, for example, text that can be displayed by a display of user interface 106, audio that can be output by an audio device of user interface 106, etc. The user-understandable representation can also be, for example, one or more icons that convey that the image is suitable for diagnostic use. Step 626 can be performed automatically and/or it can be performed in response to user inputs (e.g., when a user accesses the determination of whether the image is suitable for diagnostic use).
If the area threshold is not satisfied in step 622, method 600 proceeds to step 628. Step 628 is substantially similar to step 624, but the electronic indication generated in step 628 indicates that the image is unacceptable for diagnostic use. From step 628, method 600 can optionally proceed directly to step 630, step 632, or step 634.
Step 630 is substantially similar to step 626, but outputs the indication generated in step 628. Method 600 can optionally proceed to step 632 or step 634 following step 630, or method 600 can stop following step 630.
In step 632, a user is prompted via user interface 106 to re-scan the slide. Processor 102 can cause user interface 106 to display one or more graphical elements including one or more user-interactable elements that the user can interact with to cause image analysis system 10 to re-scan the slide analyzed via steps 608-622. If the user provides an input (i.e., via user interface 106) indicating that the user would like to re-scan the slide, method 600 can proceed to step 602 to perform an additional iteration of auto-focusing, capture, and analysis. If the user provides an input indicating that the user would not like to re-scan the slide, method 600 stops.
In step 634, the slide imaged in step 606 is automatically re-scanned. System controller 100 can be configured to automatically re-scan slides that do not pass method 600 (i.e., that are found to not be suitable for diagnostic use according to steps 608-622). In these examples, method 600 can automatically proceed (i.e., without requiring additional user input) to step 602 to perform an additional iteration of auto-focusing, capture, and analysis.
In examples where it is desirable to output a user-understandable representation of the indication generated in step 628, method 600 can include step 630 and method 600 can proceed to step 630 from step 628. It may be unnecessary, for example, to include step 630 in examples where automatic re-scan is to be performed in step 634 or where a prompt for re-scan may convey that a particular image is not acceptable for diagnostic use. In examples where automatic re-scan is undesirable but where it may be desirable for users to selectively re-scan images, method 600 can include step 632 and method 600 can proceed to step 632 from step 628 or step 630, if included in method 600. In examples where automatic re-scan is desirable, method 600 can include step 634 and method 600 can proceed to step 634 from step 628 or step 630, if included in method 600.
Advantageously, method 600 enables automated focus evaluation of histological images of tissue-containing slides. Notably, method 600 includes step 610 and optionally step 612, which improve the computational efficiency with which computer-implemented machine-learning models, such as convolutional neural networks, can be used to perform focus detection in step 614 by reducing the size of the input images for which focus scores are generated and, in examples including step 612, only using the computer-implemented machine-learning model to perform focus score predictions for tissue-containing image regions. Additionally, method 600 includes the use of separate thresholds to identify in-focus regions and, further, to identify whether the total out-of-focus area is large enough to render the overall image unsuitable for diagnostic use. Stated differently, the two-threshold approach used by method 600 allows first for out-of-focus regions to be identified, and subsequently for a determination of whether enough of the image (or enough of the portions of the image depicting tissue) is in-focus for the image to be diagnostically useful. As histological images of tissue are likely to contain out-of-focus regions, the use of both thresholds of method 600 increases the likelihood that histological images can be accurately identified as acceptable or unacceptable for diagnostic use.
Further, the use of dual thresholds allows for greater user configurability. In at least some examples, both thresholds (i.e., the threshold used in step 616 and the threshold used in step 622) can be separately selected and/or adjusted by users to allow for more precise control over the degree of focus required for an image to be identified as in-focus or out-of-focus according to method 600. As image focus and, further, whether an image is sufficiently in-focus to be diagnostically useful are often at least partially a subjective determination, the use of two thresholds by method 600 increases the degree to which an operator can configure method 600 to accurately identify in-focus and out-of-focus images according to the operator’s preferences and/or according to a particular downstream application of histological images analyzed using method 600. Operator preferences for each threshold can be provided via a user interface, such as user interface 106. In at least some examples, user interface 106 can display graphical sliders that a user can interact with to select each threshold.
AUTOMATED LABELING OF IMAGE DATA WITH FOCUS INFORMATION
FIG. 7A is a schematic diagram of Z-stacked images 710 generated by image collection module 140. Z-stacked images 710 include sample 714, image region set 716, and sharpest image region 718. FIG. 7B is a schematic diagram of image region set 716 extracted from Z-stacked images 710, and also includes sharpest image region 718. FIGS. 7A and 7B each depict arrow X, arrow Y, and arrow Z, which indicate the X-coordinate, Y-coordinate, and Z-coordinate directions, respectively.
Z-stacked images 710 are generated by capturing a series of images of sample 714, which is a histological sample. Sample 714 is an H&E stained sample in the depicted example, but in other examples, sample 714 can be any other kind of histological sample suitable for imaging. Processor 102 can move one or both of objective lens 186 and stage 178 (i.e., via objective lens positioner 198 and stage positioner 199, respectively) to different Z-distances and can cause image scanner 160 (i.e., via line scan camera 192 and/or camera 194) to capture images of sample 714 at each Z-distance. Each Z-distance corresponds to a different position of the focal plane of objective lens 186 relative to the position of sample 714, creating a set of images having differing degrees of focus. The Z-axis position of each image of Z-stacked images 710 shown in FIG. 7A corresponds to the Z-distance between objective lens 186 and stage 178 while the image was captured, and accordingly also corresponds to the position of the focal plane of objective lens 186 relative to the position of sample 714 used to capture that image.
The aforementioned process of capturing images of a single sample at varying Z-positions (e.g., between objective lens 186 and stage 178) is referred to herein as “Z-stacking” and the resultant images are referred to herein as “Z-stacked images” or a “Z-stacked image set.” Z-stacked images 710 shown in FIG. 7A are taken at regular or constant Z-coordinate intervals, such that the Z-distance between objective lens 186 and stage 178 is adjusted by the same amount to capture each image of Z-stacked images 710 and the focus distance change (i.e., the amount by which the focal plane of objective lens 186 moves relative to sample 714) is the same between each image in Z-stacked images 710. However, in other examples, Z-stacked images can be taken at known but irregular or non-constant Z-coordinate intervals, such that the Z-distance change used to create the Z-stacked images varies. Further, in some examples, each image in a set of Z-stacked images can be referred to as a “Z-layer” and can be assigned a numeric value describing the relative Z-distance between objective lens 186 and stage 178 (i.e., and therefore the Z-position of the focal plane of objective lens 186 relative to sample 714) while the image was captured.
Image region set 716 is derived from Z-stacked images 710 and is created by segmenting Z-stacked images 710 using the program(s) of segmentation module 110. All image regions of image region set 716 have the same X-coordinate span and Y-coordinate span, such that they represent the same portion of sample 714. Advantageously, as Z-stacked images 710 each capture the same X-coordinate extent and Y-coordinate extent of sample 714, any region of one image of Z-stacked images 710 (as defined by the X-coordinate and Y-coordinate range of that region) includes the same portion of sample 714 as regions defined by the same X-coordinate and Y-coordinate ranges of all other images of Z-stacked images 710. System controller 100 can create image region set 716 by segmenting an image region from each image of Z-stacked images 710 that is defined by the same X-coordinate range and Y-coordinate range. System controller 100 can then associate those image regions into image region set 716 for further processing to create labeled data suitable for use in training a computer-implemented machine-learning model to score relative image focus, as will be explained in more detail subsequently and particularly with respect to the discussion of FIGS. 8-9.
FIG. 7B shows image region set 716 removed from or otherwise in isolation from the remainder of the image data of Z-stacked images 710. Image region set 716 is merely one example of a set of image regions that can be derived from Z-stacked images 710. As will be explained in more detail subsequently, sets of image regions can be derived (i.e., by system controller 100) that cover or represent the entirety of Z-stacked images 710 and/or the entirety of the tissue-containing area of Z-stacked images 710.
Advantageously, image region sets like image region set 716 provide multiple images of the same portion of sample 714 but having different degrees of blur. To create labeled data suitable for training a computer-implemented machine-learning model, system controller 100 scores the sharpness of each image of image region set 716 and then normalizes the relative sharpness of each image to the image region having the highest sharpness.
System controller 100 can then score the sharpness of the image regions of image region set 716 using any suitable sharpness measure. In at least some examples, system controller 100 uses a Laplacian sharpness measure or a Tenengrad sharpness measure to generate the sharpness scores for image region set 716. As in-focus images are generally sharper than out-of-focus images, the sharpness of the images of a set of image regions can be used to understand relative out-of-focus degree within the set of images. In image region set 716, sharpest image region 718 is the most in-focus and, consequently, has the greatest sharpness score. As referred to herein, the “greatest” or “highest” sharpness score of a set of image regions refers to the score that indicates which image is the most in-focus, regardless of how relative focus is represented by the outputs of the particular sharpness measure used. In some examples where the sharpness measure used outputs greater values to represent higher sharpness, the highest sharpness score of the set of images corresponds to the most in-focus image region. However, if the sharpness measure used outputs lower values to represent images that are more in-focus (i.e., having higher pixel sharpness), the “highest” or “greatest” sharpness score for a set of images will be the lowest value output by the sharpness measure.
After sharpness scores are generated for each image of image region set 716, the sharpness scores can be normalized to the sharpness of the sharpest image region 718, as sharpest image region 718 has the highest sharpness score. The images of image region set 716 can then be labeled with their respective normalized sharpness scores and can, subsequently, be used for training a computer-implemented machine-learning model (e.g., a convolutional neural network) to predict relative sharpness of new image data.
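The scoring and normalization described above can be sketched in Python as follows; the variance-of-Laplacian and Tenengrad measures are the two examples named in this disclosure, while the single-channel (grayscale) region assumption and the exact Tenengrad variant are illustrative choices.

import numpy as np
from scipy import ndimage

def laplacian_sharpness(region: np.ndarray) -> float:
    # Variance of the Laplacian; larger values indicate a sharper region.
    return float(ndimage.laplace(region.astype(np.float64)).var())

def tenengrad_sharpness(region: np.ndarray) -> float:
    # Mean squared Sobel gradient magnitude (one common Tenengrad variant).
    img = region.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return float(np.mean(gx ** 2 + gy ** 2))

def normalized_sharpness_labels(region_set, measure=laplacian_sharpness):
    # Score every Z-layer of one image region set (assumed grayscale arrays)
    # and normalize each score to the sharpest region so labels fall in (0, 1].
    scores = np.array([measure(region) for region in region_set], dtype=np.float64)
    return scores / scores.max()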
Existing methods of generating training data for training a computer-implemented machine-learning model generally provide sets of images having differing blur by applying differing intensities of an artificial blur. For example, many existing methods of generating training data for training a computer-implemented machine-learning model for out-of-focus detection use images blurred with a gaussian blurring algorithm. However, out-of-focus blur from image scanners, such as image scanner 160, is unlikely to follow the gaussian distribution of blur created by a gaussian blurring algorithm. As such, a computer-implemented machine-learning model trained with artificially-blurred images may be likely to misidentify the focus or blurriness of real histological images.
Conversely, the blur of the image regions of image region set 716 is natural blur generated by the relative position of the focal plane of objective lens 186 and sample 714. As such, the blur of out-of-focus or relatively out-of-focus regions of image region set 716 is more likely to approximate the blur that occurs when image scanner 160 is focused improperly (i.e., when an incorrect Z-distance between objective lens 186 and stage 178 is used) than blur produced by artificial blurring methods. Accordingly, a computer-implemented machine-learning model trained to identify relative image focus using image region set 716 or other image region sets generated from Z-stacked images is likely to score the relative blur of real images captured by image scanner 160 or another suitable image scanning system more accurately than a model trained on image sets generated by an artificial blurring technique.
FIG. 8 depicts graph 800 of normalized focus scores 802 for the images of an image region set. Normalized focus scores 802 are another example of normalized focus scores and are generated in the manner described generally with respect to FIGS. 7A-7B and in more detail subsequently with respect to FIG. 9.
Graph 800 includes X-axis 810, which corresponds to the relative Z-layer of each image of the image region set, and Y-axis 812, which corresponds to the normalized (i.e., relative) sharpness score, as well as point 830, which corresponds to the sharpest image of the image region set. As shown in FIG. 8, normalized focus scores 802 vary between 0 and 1. Point 830, which corresponds to the sharpest image, is a focus score that is normalized to itself, such that the resultant normalized focus score is a value of 1. As such, in normalized focus scores 802, the sharpest image score is a value of 1 with the relative sharpness of each other image being a value greater than 0 and less than 1. While normalized focus scores 802 are shown as varying between 0 and 1, in other examples, focus scores can be normalized to any suitable bounds, such as -1 and 1, -10 and 10, 0 and 10, etc.
FIG. 9 is a flow diagram of method 900, which is a method of generating labeled data for training a computer-implemented machine-learning model. More specifically, method 900 is a method of generating image regions labeled with a relative focus score. Method 900 includes steps 902-916 of performing autofocusing with an autofocusing algorithm (step 902), capturing Z-stacked images of one slide (step 904), creating set(s) of image regions (step 906), generating sharpness scores with a sharpness measure algorithm (step 908), identifying the highest sharpness score for each set of image regions (step 910), generating normalized sharpness scores (step 912), labeling the image regions with normalized sharpness scores (step 914), and training a computer-implemented machine-learning model (step 916). Method 900 can be used to process images to create, for example, image region set 716 (FIGS. 7A-7B) and, further, to generate normalized focus scores 802 (FIG. 8), as described previously. Method 900 is discussed generally herein with respect to image analysis system 10, but in other examples, method 900 can be performed by any suitable system with computational hardware capable of performing automated image processing and labeling according to method 900.
In step 902, autofocusing of a histological slide is performed. The histological slide used in method 900 includes a tissue sample which can optionally be stained or dyed using a suitable staining technique. In at least some examples, the sample imaged in an iteration of method 900 is an H&E-stained sample. Step 902 can be performed in substantially the same manner as step 602 of method 600 (FIG. 6), and the discussion of step 602 herein is applicable to step 902. Step 902 is an optional step of method 900 that can be performed to increase the likelihood that one Z-layer of the Z-stacked images captured in subsequent step 904 is sufficiently in-focus to be useful for normalization of sharpness scores in the subsequent steps of method 900. That is, insofar as all sharpness scores are normalized in method 900 to the highest sharpness score of the sharpest image, having at least one in-focus image that can be used as the sharpest image for normalization increases the quality of the resultant labeled image data for training a computer-implemented machine-learning model. An auto-focusing algorithm can be used to locate a Z-distance between objective lens 186 and stage 178 that is likely to result in an in-focus image, and the other Z-distances used in step 904 can be intervals of a regular or otherwise predetermined distance from the in-focus Z-distance determined by the auto-focusing algorithm.
In step 904, image scanner 160 captures Z-stacked images of one slide on which a histological sample is mounted. Image scanner 160 captures the Z-stacked images by capturing a plurality of images and, in particular, by capturing each image at a different Z-distance between objective lens 186 and stage 178, such that the distance between objective lens 186 and the slide (and the sample mounted thereon) is different when each image of the set of Z-stacked images is captured. System controller 100 and/or motion controller 174 controls the position of at least one of objective lens 186 or stage 178 (i.e., via objective lens positioner 198 and stage positioner 199) to change the Z-distance between objective lens 186 and stage 178 for each image captured. In some examples, the images captured in step 904 are whole-slide images capturing an entirety of the sample mounted to the imaged slide and/or the entire area of the slide.
Generally, at least one image in the set of Z-stacked images is relatively in-focus and at least one image of the set of Z-stacked images is relatively out-of-focus, such that the images of the set of Z-stacked images represent both in-focus and out-of-focus image data. Further, including at least one in-focus image in the set of Z-stacked images allows the highest sharpness score identified in subsequent step 910 to correspond to an in-focus image, such that the highest normalized sharpness score (having a value, e.g., of 1) generated in subsequent step 912 also corresponds to an in-focus image. In at least some examples, one image of the set of Z-stacked images is an in-focus image and the remainder of the set of Z-stacked images are at least partially out-of-focus and are out-of-focus to varying degrees.
The set of Z-stacked images can be captured first by identifying a Z-distance for capturing an in-focus image and by capturing an image at that Z-distance, and then by varying the Z-distance from the in-focus Z-distance to capture images that are out-of-focus to varying degrees. In some examples, step 904 can be performed by capturing a first image at the Z-distance identified in step 902 and capturing subsequent images at further and/or closer (i.e., larger and smaller) Z-distances. The Z-distances can vary by, for example, 1 micrometer, such that each Z-layer of the Z-stacked images is taken 1 micrometer from adjacent Z-layers of the Z-stacked images. In at least some examples, step 904 is performed by capturing 13 images. System controller 100 can, for example, cause image scanner 160 to capture a first image at the distance identified in step 902, and then to capture six additional images while decreasing the Z-distance between objective lens 186 and stage 178 by 1 micrometer before each additional image is captured. System controller 100 can then position objective lens 186 and stage 178 to be separated by a Z-distance that is 1 micrometer greater than the distance identified in step 902, and capture six additional images while increasing the Z-distance between objective lens 186 and stage 178 by 1 micrometer after each additional image is captured. All 13 resultant images are captured at intervals of 1 micrometer, with the “central” image being predicted to be in-focus by the auto-focusing algorithm. Accordingly, the resultant set of Z-stacked images includes natural blur resulting from overly small as well as overly large Z-distances between objective lens 186 and stage 178.
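The 13-layer, 1-micrometer capture pattern of this example can be expressed as a simple list of Z-distances, as in the Python sketch below; center_z_um is a hypothetical stand-in for the auto-focused Z-distance identified in step 902.

def z_stack_offsets(center_z_um: float, step_um: float = 1.0, layers_per_side: int = 6):
    # Return the 13 Z-distances of the example above: the auto-focused center
    # distance plus six layers below it and six layers above it, spaced at
    # 1-micrometer intervals.
    return [center_z_um + step_um * i
            for i in range(-layers_per_side, layers_per_side + 1)]

# Example: z_stack_offsets(100.0) -> [94.0, 95.0, ..., 100.0, ..., 106.0]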
In step 906, system controller 100 creates one or more sets of image regions from the Z-stacked images captured in step 904. The image regions of each set of image regions have corresponding X-coordinate and Y-coordinate extents such that the image regions of each set of image regions represent the same portion of the imaged sample. Each image region of the set of image regions is derived from a different image of the Z-stacked images captured in step 904 such that each image region of a given set of image regions corresponds to a different Z-distance between the imaged sample and objective lens 186 (and, accordingly, a different Z-distance between stage 178 and objective lens 186). In at least some examples, such as the example discussed previously with respect to FIGS. 7A-7B, the X and Y coordinate extents of each image of the Z-stacked images collected in step 904 are substantially the same, such that the X-coordinate and Y-coordinate ranges of each image of a given set of image regions are identical. Further, in at least some examples, the X and Y coordinate extents of the images of different sets of image regions are different, such that each set of image regions corresponds to a different and/or non-overlapping portion of the imaged histological sample. Processor 102 of system controller 100 can, for example, execute the program(s) of segmentation module 110 to define image regions and create image region information.
In at least some examples, system controller 100 can also exclude image regions and sets of image regions that do not include tissue by using the program(s) of tissue detection module 120. Tissue exclusion can be performed in substantially the same manner as described previously with respect to the discussion of step 612 of method 600 (FIG. 6). Any suitable image region within each set of image regions can be used for analysis by tissue detection module 120. In some examples, tissue detection can be performed using image regions derived from an in-focus image or a relatively in-focus image of the set of Z-stacked images. For example, in step 906, the image of the set of Z-stacked images taken at the Z-distance identified in step 902 can first be segmented using segmentation module 110 and each region within that image can be analyzed for tissue content using tissue-detection module 120. Image regions can then be generated from the other images of the set of Z-stacked images by segmentation module 110 using the X-coordinate information and Y-coordinate information for each tissue-containing region defined within the auto-focused image (i.e., the image taken at the Z-distance identified in step 902), and image regions corresponding to the same X-coordinate and Y-coordinate extents can be associated into sets of image regions.
In some examples, segmentation module 110 can segment the images of the Z-stacked images using the same grid (e.g., as described previously in the discussion of step 610 of method 600; FIG. 6) such that each image of the Z-stacked images is divided into the same number of image regions. Each resultant image region corresponding to the same X-coordinate extent and the same Y-coordinate extent can be arranged or assigned into a set, such that the X-coordinate extents and Y-coordinate extents of all image regions in a given set of image regions are the same. In at least some examples, each image can be divided into approximately 8,000 regions.
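Assuming each Z-layer is gridded identically as described above, regions sharing a grid position across Z-layers can be grouped into image region sets; the sketch below reuses a hypothetical grid_fn that, like the earlier gridding sketch, returns (y0, x0, tile) entries in a consistent order for each layer.

def build_region_sets(z_stacked_images, grid_fn):
    # Grid every Z-layer with the same grid_fn and group the regions that
    # share a (y0, x0) grid position into one image region set.
    per_layer = [grid_fn(layer) for layer in z_stacked_images]
    region_sets = []
    for tiles in zip(*per_layer):  # same grid cell across all Z-layers
        y0, x0 = tiles[0][0], tiles[0][1]
        region_sets.append(((y0, x0), [tile for (_, _, tile) in tiles]))
    return region_sets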
Steps 908-914 are performed for each image region set created in step 906. Each image region set can be processed according to steps 908-914 in series and/or in parallel.
In step 908, the program(s) of data labeling module 150 creates sharpness scores for each image region in the set of image regions. More specifically, data labeling module 150 creates a sharpness score for each image region by scoring each image region with a sharpness measure algorithm. The sharpness measure algorithm can be, for example, a Laplacian sharpness measure or a Tenengrad sharpness measure.
In step 910, the program(s) of data labeling module 150 identifies the highest sharpness score generated in step 908. The highest sharpness score identified in step 910 belongs to the sharpest image region scored for the set of image regions in step 908. As described previously and with respect to the discussion of FIGS. 7A-7B, the highest sharpness score is the sharpness score with the highest value in examples where, according to the sharpness measure algorithm used in step 908, higher sharpness scores indicate a sharper image. However, other sharpness scoring schemes are contemplated herein and in, for example, a sharpness scoring scheme where lower sharpness scores are produced from sharper images, the highest sharpness score identified in step 910 can be the sharpness score having the lowest value.
In step 912, the program(s) of data labeling module 150 generate normalized sharpness scores. The normalized sharpness scores are normalized to the sharpness score generated in step 910. More specifically, each sharpness score generated in step 908 (including the sharpness score identified in step 910) is normalized to the sharpness score identified in step 910. The sharpness scores can be normalized to vary between 0 and 1, like normalized sharpness scores 802 (FIG. 8), by dividing each sharpness score by the highest sharpness score identified in step 910. The sharpness scores can also be normalized to vary between any other suitable numbers, such as -1 and 1, 0 and 10, -10 and 10, etc.
In step 914, the image regions are labeled with their normalized sharpness scores (i.e., the normalized sharpness scores generated in step 912). Each image for which a normalized sharpness score was generated in step 912 (i.e., each image for which a sharpness score was generated in step 908) is labeled with the normalized sharpness score for the image by associating the normalized sharpness score with the image (or an identifier for the image, such as a filename) in, for example, a table, list, etc. The labeled image data can be stored to, for example, memory 104 or another suitable computer-readable memory device for use with subsequent step 916 of method 900.
Steps 908-914 can be repeated for any suitable number of image regions. Steps 902-914 and/or steps 904-914 can be repeated to generate training data from any suitable number of slides. In at least some examples, 29 slides can be processed according to steps 902-914 and/or steps 904-914 with 13 Z-stacked images created and processed per slide to create a total of 2,911,000 labeled image regions. Of the 2,911,000 labeled image regions, in at least some examples, 2,376,000 labeled image regions are used as training data and 535,000 are used as test data in subsequent step 916. The aforementioned embodiment exemplifies how method 900 can be used to produce large volumes of labeled data suitable for training a computer-implemented machine-learning model from a relatively small pool of histological images.
In step 916, a computer-implemented machine-learning model is trained using the labeled data generated in step 914. A portion of the labeled data can be designated as training data and the remainder of the labeled data can be used as test data. The training data can be used to iteratively adjust at least one parameter of the untrained computer-implemented machine-learning model. After each training cycle, the fit of the model to the test data can be evaluated by, e.g., a human operator. Step 916 produces a trained computer-implemented machine-learning model suitable for predictively generating normalized focus scores for image regions. A computer-implemented machine-learning model trained during step 916 can be used, for example, to generate the focus scores in step 614 of method 600. Any suitable supervised training method can be used in step 916, including method 1100, as described in more detail subsequently in the discussion of FIG. 11. The model trained in step 916 can be a neural network and, in some examples, can be a convolutional neural network including at least one convolutional layer.

Method 900 advantageously creates labeled image data having real blur (i.e., blur derived from actual out-of-focus microscope images). As described previously, the existing methods for training computer-implemented machine-learning models to score image focus use artificially blurred images. In particular, existing methods often use gaussian blurring algorithms, which apply a gaussian function to image data (e.g., by convolving the image with a gaussian function). As real out-of-focus histological slide images do not typically follow the blurring pattern that results from these artificial blurring methods, using artificially-blurred data to train a computer-implemented machine-learning model can significantly reduce the accuracy of the trained model in detecting blur in real histological images. By contrast, method 900 advantageously allows training data to be generated from images with real blur (i.e., produced via step 904) via Z-stacking and labeling as set forth above, increasing the real-world accuracy of focus or blur scoring performed by a computer-implemented machine-learning model trained using data generated via method 900.
Further, the Z-stacking performed by method 900 also advantageously enables the normalization scheme performed in steps 908-912. In particular, the use of Z-stacked images allows each image region to be normalized to the sharpest corresponding image region in the Z-stack, thereby creating data that varies within a limited range, increasing the accuracy of predictions made by machine-learning models trained using data created by method 900. Further, method 900 uses a similar segmentation and tissue-detection scheme as discussed previously with respect to method 600 (FIG. 6), and confers the same advantages as discussed with respect to method 600, including advantages related to reduced computational cost. Notably, the use of image segmentation by segmentation module 110 and, optionally, tissue-detection by tissue detection module 120 creates image data that reduces computational cost during both training and testing of computer-implemented machine-learning models.
FIG. 10 is a flow diagram of method 1000, which is a method of generating composite images that can be included in labeled data used to train a computer-implemented machine-learning model to evaluate the focus of image data. Method 1000 includes steps 1002-1012 of receiving image regions belonging to the same set of image regions (step 1002), defining subregions for compositing (step 1004), compositing one image region with subregion(s) from other image region(s) of the image region set (step 1006), generating a composite normalized sharpness score (step 1008), labeling the composite image with the composite normalized sharpness score (step 1010), and adding the composite image to the labeled data used for training a computer-implemented machine-learning model (step 1012). Method 1000 is discussed generally herein with respect to image analysis system 10, but in other examples, method 1000 can be performed by any suitable system with computational hardware capable of performing automated image processing and labeling according to method 1000.
In step 1002, processor 102 receives image regions belonging to a single set of image regions. The image regions can be generated according to step 906 of method 900 (FIG. 9). The image regions received in step 1002 depict the same area of the imaged sample and can, for example, correspond to the same X-coordinate extents and Y-coordinate extents of different images of a set of Z-stacked images, as described previously with respect to step 906 of method 900 (FIG. 9). At least two image regions of a single set of image regions are received in step 1002, but more than two images can be received in examples where more than two images are used to create a composited image.
In step 1004, processor 102 defines subregions of the received image regions for compositing. The subregions have the same X-coordinate extent and Y-coordinate extent (i.e., they depict the same portion of the imaged sample) and belong to different image regions of the image regions received in step 1002. The subregions defined in step 1004 are generally defined in pairs having the same X-coordinate extent and Y-coordinate extent, such that one subregion can be replaced with the other subregion of the pair to create a composite image region in subsequent step 1006. In examples where more than two image regions are used for compositing, one image region can be designated as a “primary” or “main” image into which data from the other image regions is composited. The “primary” or “main” image includes one subregion of all subregion pairs designated in step 1004 (i.e., a target region for image compositing).
In step 1006, processor 102 composites an image region with subregions from the other image regions received in step 1002. In particular, for each pair of subregions defined in step 1004, the image data of one image region corresponding to the subregion is replaced with image data from another image region having the same X-coordinate extent and Y-coordinate extent (i.e., the pair of the subregion as defined in step 1004). Step 1006 creates an image region including at least one composited subregion derived from at least one other image region. As described previously, all image regions used for compositing belong to a single set of image regions as created during step 906 of method 900 (FIG. 9).
In step 1008, processor 102 creates a composite normalized sharpness score for the composite image. The composite normalized sharpness score is a weighted average of the normalized sharpness scores for the image regions used to generate the composite image in step 1006, with normalized sharpness scores weighted according to relative pixel area occupied by each contributory image region in the composite image. That is, the composite normalized sharpness score is a weighted average where each normalized sharpness score for each image region used to generate the composite image in step 1006 is weighted proportionately according to the pixel area belonging to that image region in the composite image.
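A minimal sketch of steps 1006 and 1008 follows; the rectangular subregion bounds are illustrative, and the worked example assumes a 256 x 256 pixel primary region whose 64 x 64 pixel subregion is replaced by pixels from a blurrier layer of the same image region set.

import numpy as np

def composite_regions(primary: np.ndarray, donor: np.ndarray,
                      y0: int, y1: int, x0: int, x1: int) -> np.ndarray:
    # Copy the primary image region and replace one rectangular subregion with
    # the corresponding pixels from a donor region of the same image region set.
    composite = primary.copy()
    composite[y0:y1, x0:x1] = donor[y0:y1, x0:x1]
    return composite

def composite_score(primary_score: float, donor_score: float,
                    primary_area: int, donor_area: int) -> float:
    # Area-weighted average of the contributing normalized sharpness scores.
    total = primary_area + donor_area
    return (primary_score * primary_area + donor_score * donor_area) / total

# Example: composite_score(1.0, 0.4, 256 * 256 - 64 * 64, 64 * 64) is
# approximately 0.96, reflecting the small blurrier subregion.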
In step 1010, processor 102 labels the composite image with the normalized sharpness score generated in step 1008. Step 1010 associates the composite image generated in step 1006 with the score generated in step 1008. Step 1010 is substantially similar to step 914 of method 900 (FIG. 9) and the discussion herein of step 914 is applicable to step 1010 of method 1000.
In step 1012, processor 102 adds the labeled composite image to the labeled data created in at least one iteration of method 900. The labeled composite image can subsequently be used for training a computer-implemented machine-learning model as described with respect to step 916 of method 900 (FIG. 9) and/or as described subsequently with respect to method 1100 (FIG. 11).
The labeled composite images produced by method 1000 can be used during training to provide training data that includes non-uniform focus. Advantageously, the use of training data having a non-uniform focus can more accurately approximate real out-of- focus histological images than training data having uniform focus and, accordingly, can be used to improve the accuracy of computer-implemented machine-learning models used for out-of-focus detection (e.g., via method 600; FIG. 6). Method 1000 can be repeated any suitable number of times using any suitable number of image regions to create any suitable number of labeled composite images that can be used to train computer-implemented machine-learning models.
FIG. 11 is a flow diagram of method 1100, which is a method of training a machine-learning algorithm according to the present disclosure. Method 1100 can be used to train machine-learning models to generate focus scores using labeled data generated in step 914 of method 900. Method 1100 includes steps 1101-1106 of receiving labeled image data (step 1101), generating training data and test data (step 1102), training a computer-implemented machine-learning model with the training data (step 1104), and testing the trained computer-implemented machine-learning model with the test data (step 1106). Machine-learning algorithms trained according to method 1100 are capable of accepting image data as an input and are configured to output values that represent or otherwise indicate the degree to which an input image is in-focus and/or out-of-focus. Method 1100 can be used to train any suitable machine-learning model and can be used, for example, to train a neural network to score image focus. In at least some examples, method 1100 can be used to train a convolutional neural network having at least one convolutional layer. Method 1100 is described herein generally with respect to image analysis system 10 (FIGS. 1-2) and the use of labeled data generated by method 900 (FIG. 9) for explanatory clarity and convenience. However, method 1100 can be used by any suitable system with any suitable labeled image data, including labeled image data generated by a method other than method 900, to generate trained computer-implemented machine-learning models for determining the degree or extent to which image data is in-focus and/or out-of-focus.
In step 1101, labeled image data is received by processor 102. Processor 102 can receive the labeled image data by generating the image data in step 914 (i.e., in examples where the labeled images are images labeled with relative sharpness scores). Processor 102 can also receive the labeled image data by, for example, manual entry by a human operator, retrieval from a database, retrieval from a non-volatile computer-readable memory device, etc. Each image of the image data received in step 1101 is labeled with a numeric value describing the degree to which the image is in-focus and/or out-of-focus. In at least some examples, the labeled image data is data generated according to steps 902-914 and/or steps 904-914 of method 900 (FIG. 9).
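As a purely hypothetical illustration of how each labeled image received in step 1101 might be represented in memory, the record below pairs image pixel data with its numeric focus label; the type name and field names are assumptions and are not taken from the disclosure.

```python
# Hypothetical record for labeled image data; names and fields are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class LabeledImage:
    pixels: np.ndarray   # pixel data for the image or image region
    focus_label: float   # numeric value describing the degree of focus
                         # (e.g., a relative sharpness score from method 900)
```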
In step 1102, processor 102 generates labeled training data and labeled test data. The labeled training data and labeled test data can be generated from, for example, the labeled image data received in step 1101. More specifically, a portion (e.g., a majority) of the labeled image data can be designated as training data and a remainder (e.g., a minority) of the labeled image data can be designated as test data. In other examples, the labeled image data received in step 1101 can be used as training data and additional image data of the same kind as the labeled image data can be used as test data. In these examples, the accuracy or fit of the computer-implemented machine-learning model to the test data can be evaluated by a trained human operator and/or by one or more programs executed by processor 102.
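The split described above might be implemented, for example, along the following lines; the 80/20 proportion, function name, and use of a fixed random seed are illustrative assumptions only.

```python
# Illustrative only: designate a majority of the labeled image data as training
# data and the remainder as test data (step 1102).
import random

def split_labeled_data(labeled_images, train_fraction=0.8, seed=0):
    """labeled_images: list of labeled images (e.g., LabeledImage records)."""
    shuffled = list(labeled_images)
    random.Random(seed).shuffle(shuffled)
    cutoff = int(len(shuffled) * train_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]  # (training data, test data)
```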
In step 1104, the training data is used to train the computer-implemented machine-learning model. As used herein, “training” a computer-implemented machine-learning model refers to any process by which parameters, hyperparameters, weights, and/or any other value related to model accuracy are adjusted to improve the fit of the computer-implemented machine-learning model to the training data. Training in step 1104 can be performed iteratively to progressively adjust and improve the fit of the model to the training data.
In step 1106, the trained computer-implemented machine-learning model is tested with the test data. More specifically, a human or machine operator can evaluate the performance of the machine-learning model by evaluating the fit of the model to the test data. The fit of the model can be evaluated against all or a portion of the test data generated in step 1102. As depicted in FIG. 11, steps 1104 and 1106 can be performed iteratively to improve the performance of the machine-learning model. More specifically, if the fit of the model to the test data determined in an iteration of step 1106 is undesirable, step 1104 can be repeated to further adjust the parameters, hyperparameters, weights, etc. of the model (e.g., via re-training) to improve the model. Step 1106 can then be repeated with a new set of test data (e.g., a different subset of the test data generated in step 1102) and/or the same set of test data to determine how the adjusted model fits the test data. If the fit continues to be undesirable, further iterations of steps 1104 and 1106 can be performed until the fit of the model becomes desirable.
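One possible shape of the iterative loop formed by steps 1104 and 1106 is sketched below; `train_one_round`, `evaluate_fit`, `target_fit`, and the iteration cap are placeholders for whatever training routine, fit metric, and acceptance criterion a given model uses, and are not prescribed by the disclosure.

```python
# Illustrative only: repeat training (step 1104) and testing (step 1106) until
# the fit of the model to the test data is acceptable.
def train_until_acceptable(model, training_data, test_data,
                           train_one_round, evaluate_fit,
                           target_fit, max_iterations=50):
    fit = float("-inf")
    for _ in range(max_iterations):
        train_one_round(model, training_data)   # step 1104: adjust parameters/weights
        fit = evaluate_fit(model, test_data)    # step 1106: evaluate fit to test data
        if fit >= target_fit:                   # fit is desirable; stop iterating
            break
    return model, fit
```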
Method 1100 advantageously allows computer-implemented machine-learning models to be used to score the degree to which image data is in-focus and/or out-of-focus in step 614 of method 600 (FIG. 6). The use of a computer-implemented machine-learning model trained according to method 1100 to identify the degree to which image data is focused (e.g., by generating a focus score representative of the degree to which image data is in-focus or out-of-focus) offers significantly more accuracy than conventional methods of scoring the focus of image data that do not use computer-implemented machine-learning models.
FIGS. 12A-12C represent different embodiments of line scan camera sensors suitable for use as or as part of line scan camera 192 and/or camera 194 (FIG. 2). FIG. 12A is a schematic diagram of linear array 1240, which is an example of a sensor for a line scan camera. Linear array 1240 is a single linear array that includes a plurality of individual pixel elements 1245. Pixel elements 1245 are the photosensor components of linear array 1240. In the depicted example, linear array 1240 has 4096 pixels. In other examples, linear array 1240 may have more or fewer pixels. For example, common formats of linear arrays include 512, 1024, and 4096 pixels. Pixel elements 1245 are arranged in a linear fashion and, in combination with the magnification of image scanner 160 (e.g., of objective lens 186), define field of view 196. In some examples, linear array 1240 is a charge-coupled device (“CCD”) array.
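As a rough, illustrative relationship only, the object-space width of field of view 196 scales with the pixel count of linear array 1240 and inversely with the magnification; the 7 µm pixel pitch and 20× magnification in the worked example below are assumed values, not taken from the disclosure.

```latex
\text{field of view} \approx \frac{N_{\text{pixels}} \times p_{\text{pixel pitch}}}{M_{\text{magnification}}}
\quad\Rightarrow\quad
\frac{4096 \times 7\,\mu\text{m}}{20} \approx 1.4\,\text{mm}
```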
FIG. 12B is a schematic diagram of color array 1250, which is another example of a sensor for a line scan camera (e.g., line scan camera 192). Color array 1250 includes three linear arrays, each of which may be implemented as a CCD array, and which combine to form color array 1250. Each individual linear array in color array 1250 detects a different color intensity, for example, red, green, or blue. The color image data from each individual linear array in the color array 1250 is combined to form a single field of view (e.g., field of view 196) of color image data.
FIG. 12C is a schematic diagram of TDI array 1255, which is another example of a sensor for a line scan camera (e.g., line scan camera 192). TDI array 1255 includes a plurality of linear arrays, each of which may be implemented as a CCD array, and which combine to form TDI array 1255. As described previously, TDI array sensors, such as TDI array 1255, advantageously have improved SNR as compared to other linear array sensors by summing intensity data from previously imaged portions of a specimen, yielding an increase in the SNR that is in proportion to the square-root of the number of linear arrays (also referred to as integration stages). TDI array 1255 can include any suitable number of linear arrays such as, for example, 24, 32, 48, 64, 96, or 120 linear arrays.
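For illustration, the square-root relationship noted above can be written as follows, where N is the number of integration stages; the 64-stage example is merely one of the options listed above.

```latex
\mathrm{SNR}_{\mathrm{TDI}} \approx \sqrt{N}\cdot \mathrm{SNR}_{\mathrm{single\ array}}
\quad\Rightarrow\quad
N = 64 \;\text{yields a}\; \sqrt{64} = 8\text{-fold SNR improvement}
```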
Although the present disclosure makes reference to preferred embodiments, it will be understood by those skilled in the art that various changes may be made in form and in detail, and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from the essential scope thereof.
DISCUSSION OF POSSIBLE EMBODIMENTS
The following are non-exclusive descriptions of possible embodiments of the present invention.
A method of automated slide image focus evaluation including receiving an image scan of a histological slide, segmenting the image scan into a plurality of regions, generating a focus score for each region of the plurality of regions using a trained computer- implemented machine-learning model configured to accept image data as an input and to output a value describing an extent to which the image data is in focus, determining a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generating a determination of whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generating an indication of the determination of whether the image scan is acceptable for diagnostic use.
The method of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components or steps:
A method as set forth above, and further comprising generating a predicted in-focus distance between a lens of a histological image scanner and a stage on which the histological slide is placed using an autofocusing algorithm, adjusting a distance between the lens and the stage to be the predicted in-focus distance, and capturing the image scan of the histological slide while the distance between the lens and the stage is the predicted in-focus distance.
A method as set forth above, wherein the autofocusing algorithm is different from the trained computer-implemented machine-learning model and the predicted in-focus distance is predicted by the autofocusing algorithm to position a focal plane of the lens for generating a focused image scan of the histological slide.
A method as set forth above, and further comprising outputting, by a user interface of a user device, a user-understandable representation of the indication.
A method as set forth above, wherein generating the determination of whether the image scan is acceptable comprises determining that the image scan is not acceptable.
A method as set forth above, and further comprising capturing the image scan using an imaging system and prompting, via a user interface of a user device, a user of the user device with one or more software buttons that, when selected by the user, cause the imaging system to capture a new image scan of the histological slide.
A method as set forth above, wherein generating the determination of whether the image scan is acceptable comprises determining that the image scan is not acceptable.
A method as set forth above, and further comprising capturing the image scan using an imaging system and automatically causing, after determining that the image scan is not acceptable, the imaging system to capture a new image scan of the histological slide.
A method as set forth above, wherein the threshold number of regions is determined using a percent coverage threshold and a numerosity of the plurality of regions.

A method as set forth above, wherein the threshold number of regions is a threshold number of contiguous regions.
A method as set forth above, wherein the image scan is a whole-slide image scan.
A method as set forth above, wherein all regions of the plurality of regions are equally-sized.
A method as set forth above, wherein each region of the plurality of regions is rectangular.
A method as set forth above, and further comprising generating labeled image data by labeling training image regions with focus scores, generating training data from a portion of the labeled image data and generating test data from a remainder of the labeled image data, and iteratively training a computer-implemented machine-learning model to generate the trained computer-implemented machine-learning model configured to generate values describing an extent to which the image data is in focus, wherein iteratively training the computer-implemented machine-learning model comprises iteratively adjusting at least one parameter of the computer-implemented machine-learning model to iteratively adjust a fit of the computer-implemented machine-learning model to the training data.
A method as set forth above, and further comprising generating test data from a remainder of the labeled image data and testing a fit of the trained computer- implemented machine-learning model to the test data.
A method as set forth above, and further comprising analyzing each region of the plurality of regions using a tissue detection algorithm to determine whether a threshold pixel area of tissue is present, and wherein generating the focus score for each region of the plurality of regions comprises generating the focus score for each region of the plurality of regions having greater than the threshold pixel area of tissue.
A method as set forth above, wherein the threshold number of regions is determined using a percent coverage threshold and a numerosity of the plurality of regions having greater than the threshold pixel area of tissue.
A system for focus evaluation of a histological slide includes a histological image scanner, a processor operatively connected to the histological image scanner, and at least one computer-readable memory. The at least one computer-readable memory is encoded with instructions that, when executed, cause the processor to receive, from the histological image scanner, an image scan of a histological slide, segment the image scan into a plurality of regions, generate a focus score for each region of the plurality of regions using a trained computer-implemented machine-learning model configured to accept image data as an input and to output a value describing an extent to which the image data is in focus, determine a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generate a determination of whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generate an indication of the determination.
The system of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components:
A system as set forth above, wherein the instructions, when executed, further cause the processor to generate a predicted in-focus distance between a lens and a stage on which the histological slide is placed using an autofocusing algorithm, cause the histological image scanner to adjust a distance between the lens and the stage to be the predicted in-focus distance, and cause the histological image scanner to capture the image scan of the histological slide while the distance between the lens and the stage is the predicted in-focus distance.
A system as set forth above, and further comprising a user interface, and wherein the instructions, when executed, cause the processor to, upon determining that the image scan is not acceptable, cause the user interface to prompt a user with one or more software buttons that, when selected by the user, cause the imaging system to capture a new image scan of the histological slide.
A system as set forth above, wherein the instructions, when executed, cause the processor to, upon determining that the image scan is not acceptable, automatically cause the histological image scanner to capture a new image scan of the histological slide.
A system as set forth above, wherein the instructions, when executed, cause the processor to receive labeled image data including training data and test data, the labeled image data comprising training image regions labeled with focus scores, and iteratively train a computer-implemented machine-learning model to generate the trained computer-implemented machine-learning model configured to generate focus scores for image regions, wherein iteratively training the computer-implemented machine-learning model comprises iteratively adjusting at least one parameter of the computer-implemented machine-learning model to iteratively adjust a fit of the computer-implemented machine-learning model to the training data.

A method of automated slide image focus evaluation of histological images including adjusting a distance between a lens and a stage on which a histological slide is placed to a predicted in-focus distance generated by an autofocusing algorithm, capturing an image scan of the histological slide while the distance between the lens and the stage is the predicted in-focus distance, segmenting the image scan into a plurality of regions, generating a focus score for each region of the plurality of regions using a trained computer-implemented machine-learning model configured to accept image data as an input and to output a value describing an extent to which the image data is in focus, determining a number of regions of the plurality of regions having focus scores that satisfy a threshold focus score, generating a determination of whether the image scan is acceptable for diagnostic use based on whether the number of regions satisfies a threshold number of regions, and generating an indication of the determination of whether the image scan is acceptable for diagnostic use.
The method of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components or steps:
A method as set forth above, wherein generating the determination of whether the image scan is acceptable comprises determining that the image scan is not acceptable, and capturing the image scan comprises capturing the image scan using an imaging system, and further comprising prompting, via a user interface of a user device, a user of the user device with one or more software buttons usable to cause the imaging system to capture a new image scan of the histological slide.
A method as set forth above, wherein generating the determination of whether the image scan is acceptable comprises determining that the image scan is not acceptable, and capturing the image scan comprises capturing the image scan using an imaging system, and further comprising automatically causing, after determining that the image scan is not acceptable, the imaging system to capture a new image scan of the histological slide.
While the invention has been described with reference to an exemplary embodiment(s), it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment(s) disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.