Method and apparatus for generating a plurality of parameters of an object in a field of view

Info

Publication number
US5432865A
Authority
US
United States
Prior art keywords
pixel
representation
value
image
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US07/822,190
Inventor
Harvey L. Kasdan
John Liberty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iris International Inc
Original Assignee
International Remote Imaging Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Remote Imaging Systems Inc
Priority to US07/822,190
Application granted
Publication of US5432865A
Assigned to INTERNATIONAL REMOTE IMAGING SYSTEMS, INC. Assignment of assignors interest (see document for details). Assignors: INTERNATIONAL REMOTE IMAGING SYSTEMS, INC., A CALIFORNIA CORPORATION
Assigned to CITY NATIONAL BANK. Security interest (see document for details). Assignors: INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.
Assigned to FOOTHILL CAPITAL CORPORATION. Security agreement. Assignors: INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.
Assigned to CITY NATIONAL BANK. Release of security agreement. Assignors: INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.
Anticipated expiration
Expired - Lifetime

Abstract

A method and an apparatus for generating a plurality of parameters of an object in a field of view is disclosed. An electrical image of the field of view is formed. The electrical image is processed to form a plurality of different representations of the electrical image, where each different representation is a representation of a different parameter of the field of view. Positional information, which represents the boundary of the object, is generated. In response to the positional information being generated, corresponding locations in each of the different representations are traced. The different parameters from each of the different representations are calculated as the locations are traced in each of the different representations.

Description

This is a continuation of application Ser. No. 07/350,400, filed May 11, 1989, now U.S. Pat. No. 5,121,436, which is a continuation-in-part of application Ser. No. 085,985, filed on Aug. 14, 1987, now abandoned, assigned to the present assignee.
TECHNICAL FIELD
The present invention relates to a method and an apparatus for generating a plurality of parameters of an object in a field of view, and more particularly, to a method and apparatus where a plurality of parameters of an object in a field of view are determined in response to positional information representing the boundary of the object being provided.
BACKGROUND ART
Microscopical image analysis is well known in the art. See, for example, U.S. Pat. No. 4,097,845. The purpose of image analysis is to determine particular qualities of objects in the field of view. In particular, in the case of image analysis of microscopical samples such as biopsies, blood or urine, it is highly desirable to determine properties of the particles in view such as area, mass density, shape, etc. However, in order to determine a particular parameter of the particles in the field of view, the boundary of the particles must first be located.
In U.S. Pat. No. 4,097,845, a method of locating the boundary of particles is described using the technique of "neighbors of the neighbors".
U.S. Pat. No. 4,060,713 also discloses an apparatus for processing two-dimensional data. In that reference, an analysis of the six nearest neighbors of an element is made.
U.S. Pat. No. 4,538,299 discloses yet another method for locating the boundary of a particle in the field of view.
A urinalysis machine manufactured and sold by International Remote Imaging Systems, Inc. under the trademark The Yellow IRIS™ has used the teaching of the '299 patent to locate the boundary of a particle and thereafter to determine the area of the particle. However, The Yellow IRIS™ used the positional information of the boundary of a particle to determine only a single parameter of the particle. Further, The Yellow IRIS™ did not generate a representation of the parameter of area in the field of view that is separate and apart from the image containing the boundary of the particle.
SUMMARY OF THE INVENTION
A method and apparatus is provided for generating a plurality of parameters of an object in a field of view. The apparatus has imaging means for forming an electrical image of the field of view. Means is provided for segmenting the electrical image to form a plurality of different representations of the electrical image, wherein each different representation is a representation of a different parameter of the field of view. Generating means provides the positional information that represents the boundary of the object. Tracing means locates positions in each of the different representations in response to the positional information generated. Finally, calculating means provides the different parameters from each of the different representations based upon the locations traced in each of the different representations.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of an imaging system of the present invention.
FIG. 2 is a block diagram of the video image processor of the imaging system of the present invention, shown with a plurality of modules, and a plurality of data buses.
FIG. 3 is a schematic block diagram of the portion of each module of the video image processor with communication means, and logic control means to interconnect one or more of the data buses to the module.
FIG. 4 is a detail circuit diagram of one implementation of the logic unit shown in FIG. 3.
FIGS. 5(a-c) are schematic block diagrams of various possible configurations connecting the modules to the data buses.
FIG. 6 is a schematic block diagram of another embodiment of a video image processor shown with a plurality of data buses which can be electronically switched.
FIG. 7 is a schematic block diagram of the portion of the video image processor shown in FIG. 6 showing the logic unit and address decode unit and the switching means to electronically switch the data buses of the video image processor shown in FIG. 6.
FIGS. 8(a-c) show various possible embodiments as a result of switching the data buses of the video image processor shown in FIG. 6.
FIG. 9 is a detail circuit diagram of a portion of the switch and logic unit of the video image processor shown in FIG. 6.
FIG. 10 is a schematic block diagram of the video processor module of the video image processor shown in FIGS. 2 or 6.
FIG. 11 is a schematic block diagram of an image memory module of the video image processor shown in FIGS. 2 or 6.
FIG. 12 is a schematic block diagram of a morphological processor module of the video image processor shown in FIGS. 2 or 6.
FIG. 13 is a schematic block diagram of the graphic controller module of the video image processor shown in FIGS. 2 or 6.
FIG. 14 is a block schematic diagram of the master controller of the video image processor shown in FIGS. 2 or 6.
FIG. 15 is a circuit diagram of another implementation of a logic unit.
FIG. 16 is an example of a digitized image of the field of view with a particle contained therein.
FIG. 17 is an example of the electrical image shown in FIG. 16 processed to form a representation of the electrical image which is a representation of the area of the field of view.
FIG. 18 is an example of the electrical image shown in FIG. 16 processed to form a representation of the electrical image which is a representation of the integrated optical density of the field of view.
FIG. 19 is an example of the electrical image shown in FIG. 16 processed to form a first representation containing the boundary of the object in the field of view, in accordance with the method as disclosed in U.S. Pat. No. 4,538,299.
FIG. 20 is an example of the calculation of the area of the object in the field of view of the example shown in FIG. 16 in accordance with the method of the present invention.
FIG. 21 is the calculation of the integrated optical density of the object in the field of view of the example shown in FIG. 16 in accordance with the method of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
Referring to FIG. 1 there is shown an imaging system 8 of the present invention. The imaging system 8 comprises a video image processor 10, which receives analog video signals from a color camera 12. The color camera 12 is optically attached to a fluorescent illuminator 14 which is focused through a microscope 16 and is directed at a stage 18. A source of illumination 20 provides the necessary electromagnetic radiation. The video image processor 10 communicates with a host computer 22. In addition, the host computer 22 has software 24 stored therein to operate it. Finally, a full color monitor display device 26 receives the output of the video image processor 10.
There are many uses for the video image processor 10. In the embodiment shown in FIG. 1, the imaging system 8 is used to analyze biological specimens, such as biopsy material, or constituents of blood. The biological specimen is mounted on a slide and is placed on the stage 18. The video image of the slide as taken by the color camera 12 through the microscope 16 is processed by the video image processor.
In the preferred embodiment, the host computer 22 is a Motorola 68000 microprocessor and communicates with the video image processor 10 via a Q-bus. The Q-bus is a standard communication protocol developed by Digital Equipment Corporation.
As shown in FIG. 2, the video image processor 10 comprises a master controller 30 and a plurality of electronic digital modules. Shown in FIG. 2 are a plurality of processor modules: the video processor 34, graphic controller processor 36, morphological processor 40, and a plurality of image memory modules: image memory modules 38a, 38b and 38c. The image memory modules store data which is representative of the video images. The processor modules process the data or the video images. The master controller 30 communicates with each one of the plurality of digital modules (34, 36, 38 and 40) via a control bus 32. In addition, the plurality of digital modules (34, 36, 38 and 40) communicate with one another via a plurality of data buses 42.
In the video image processor 10, the master controller 30 controls the operation of each one of the plurality of digital modules (34, 36, 38 and 40) by passing control signals along the control bus 32. The bus 32 comprises a plurality of lines. The bus 32 comprises 8 bit lines for address, 16 bit lines for data, 4 bit lines of control, one line for vertical sync and one line for horizontal sync. In addition, there are numerous power and ground lines. The 4 bits of control include a signal for clock, ADAV, CMD, and WRT (the function of these control signals will be described later).
The plurality of data buses 42, which interconnect the modules (34, 36, 38 and 40) with one another, comprise nine 8 bit wide data buses 42. The nine data buses 42 are designated as 42A, 42B, 42C, 42D, 42E, 42F, 42G, 42H, and 42I, respectively.
Within each module (34, 36, 38 and 40) is a communication means 54. Further, within each module is a logic unit means 52 which is responsive to the control signals on the control bus 32 for connecting the communication means 54 of each module to one or more of the data buses 42.
Referring to FIG. 3 there is shown a schematic block diagram of the portion of each of the modules which is responsive to the control signals on the control bus 32 for interconnecting one or more of the data buses 42 to the communication means 54 within each of the modules. Shown in FIG. 3 is an address decode circuit 50. The address decode circuit 50 is connected to the eight address lines of the control bus 32. The address decode circuit 50 also outputs a signal 56 which activates its associated logic unit 52. Since each logic unit 52 has a unique address, if the address present on the address lines matches the address for that particular logic unit 52, the address decode circuit 50 sends a signal 56 to activate that logic unit 52. Within each module, there can be a plurality of logic units 52, each with an associated address decoder 50. Each of the plurality of logic units 52 can perform different tasks.
The logic unit 52 receives the 16 bits of data from the 16 bit data portion of the control bus 32. In addition, the logic unit 52 can also be connected to the four control lines (clock, ADAV, CMD, WRT, as previously described) of the control bus 32, and to vertical sync and horizontal sync. The logic unit 52 will then control the operation of a plurality of tri-state transceivers 54A, 54B, 54C, 54D, 54E, 54F, 54G, 54H and 54I. It is to be understood that there are eight individual tri-state transceivers 54 for the group of tri-state transceivers 54A, eight individual tri-state transceivers for the group of tri-state transceivers 54B, etc. The function of the tri-state transceivers 54 is to connect one or more of the data buses 42 to functions within the module of which the logic unit 52 and address decode circuit 50 are a part. In addition, within the module, a cross-point switch 58 may be connected to all of the outputs of the tri-state transceivers 54 and multiplex the plurality of tri-state transceivers 54 onto a single 8 bit wide bus 60.
Referring to FIG. 4 there is shown a simplistic example of the address decoder 50, the logic unit 52, and one of the group of transceivers 54A interconnecting with the bus 42A. As previously stated, the eight address signal lines of the control bus 32 are supplied to the address decoder 50. If the address supplied on the address lines of the control bus 32 correctly decodes to the address of the logic unit 52, the address decoder 50 sends a signal 56 going high which is supplied to the logic unit 52. The address decode circuit 50 can be of conventional design.
Logic unit 52 comprises two AND gates 62A and 62B whose outputs are connected to J-K flipflops 64A and 64B respectively. The AND gates 62A and 62B receive at one of their inputs the control signal 56 from the address decoder 50. The other input to the AND gates 62A and 62B is from the data lines of the control bus 32. If the address decoder 50 determines that the logic unit 52 is to be activated, as determined by the correct address on the address lines of the control bus 32, the control signal 56 going high gates into the flipflops 64A and 64B the data present on the data lines of the control bus 32. The outputs of the J-K flipflops 64A and 64B are used to control the eight tri-state transceivers 54A0 . . . 54A7. Each of the eight tri-state transceivers has one terminal connected to one of the eight bit communication paths of the bus 42A. The other terminal of each of the tri-state transceivers 54A is connected to electronic elements within the module.
The tri-state transceivers 54A, as the name suggests, have three states. The transceivers 54A can provide communication to the data bus 42A. The tri-state transceivers 54A can provide data communication from the data bus 42A. In addition, the tri-state transceivers 54A can be in the open position, in which case no communication occurs to or from the data bus 42A. As an example, the tri-state transceivers 54A are components manufactured by Texas Instruments designated as 74AS620. These tri-state transceivers 54A receive two inputs. If the inputs have the combination of 0 and 1, they denote communication in one direction. If the tri-state transceivers receive the inputs of 1 and 0, they denote communication in the opposite direction. If the tri-state transceivers 54A receive 0 and 0 on the two input lines, then the tri-state transceivers 54A are in the open position. Since the tri-state transceivers 54A0 . . . 54A7 are all switched in the same manner, i.e. either all eight lines are connected to the data bus 42A or they are not, the outputs of the flipflops 64A and 64B are used to control all eight transceivers to interconnect one of the data buses. The logic unit 52 can also comprise other flipflops and control gates to control other tri-state transceivers, which are grouped in groups of eight, to gang the switching of the selection of connection to one or more of the other data buses 42.
Because the interconnection of one or more of the data buses 42 to one or more of the plurality of modules (34, 36, 38 and 40) is under the control of the control bus 32, the data paths for the connection of the data buses 42 (A-I) can be dynamically reconfigured.
Referring to FIG. 5a, there is shown one possible configuration with the dynamically reconfigurable data buses 42. Since each data bus 42 is 8 bits wide, the plurality of modules (34, 36, 38 and 40) can be connected to receive data from two data buses (e.g. 42A and 42B) simultaneously. This is data processing in the parallel mode in which 16 bits of data are simultaneously processed along the data bus. Thus, the data buses 42 can be ganged together to increase the bandwidth of data transmission.
Referring to FIG. 5b, there is another possible configuration for the data buses 42. In this mode of operation, module 34 can transmit data on data bus 42A to module 36. Module 36 can communicate data with module 38 along the data bus 42B. Finally, module 38 can communicate with module 40 along the data bus 42C. In this mode, which is termed pipeline processing, data can flow from one module to another sequentially or simultaneously since data is flowing on separate and unique data buses.
Referring to FIG. 5c, there is shown yet another possible configuration for the data buses 42. In this mode the operation is known as macro interleaving. If, for example, the module 34 is able to process or transmit data faster than the modules 36 or 38 can receive them, module 34 can send every odd data byte to module 36 along data bus 42A and every even data byte along bus 42B to the module 38. In this manner, data can be stored or processed at the rate of the fastest module. This is unlike the prior art where a plurality of modules must be operated at the speed of the slowest module.
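By way of illustration only, the macro-interleave idea can be sketched in software as follows (a minimal Python sketch, not a description of the bus hardware; the function names and the assignment of odd bytes to bus 42A and even bytes to bus 42B are assumptions made for the example):

def macro_interleave(data: bytes):
    # A fast source module alternates bytes between two data buses so that two
    # slower receiving modules each see only half the data rate.
    bus_a = data[0::2]   # every odd-numbered byte (1st, 3rd, ...) -> e.g. module 36 via bus 42A
    bus_b = data[1::2]   # every even-numbered byte (2nd, 4th, ...) -> e.g. module 38 via bus 42B
    return bus_a, bus_b

def deinterleave(bus_a: bytes, bus_b: bytes) -> bytes:
    # Recombine the two half-rate streams in their original order.
    out = bytearray()
    for a, b in zip(bus_a, bus_b):
        out += bytes([a, b])
    if len(bus_a) > len(bus_b):       # odd-length stream: the last byte went to bus A
        out.append(bus_a[-1])
    return bytes(out)

if __name__ == "__main__":
    stream = bytes(range(10))
    a, b = macro_interleave(stream)
    assert deinterleave(a, b) == stream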
Thus, as can be seen by the examples shown in FIGS. 5a-5c, with a dynamically reconfigurable data bus structure, a variety of data transmission paths, including but not limited to those shown in FIGS. 5(a-c), can be dynamically and electronically reconfigured.
Referring to FIG. 6, there is shown yet another embodiment of a video image processor 110. The video image processor 110, similar to the video image processor 10, comprises a master controller 130 and a plurality of digital modules 134, 136 (not shown), 138 (A-B) and 140. These modules, similar to the modules 34, 36, 38 and 40, perform the respective tasks of image processing and image storing. The master controller 130 communicates with each one of the modules via a control bus 132. Each one of the modules 134-140 is also connected to one another by a plurality of data buses 42A-42I. Similar to the video image processor 10, there are nine data buses, each bus being 8 bits wide.
The only difference between the video image processor 110 and the video image processor 10 is that along each of the data buses 42 is interposed a switch means 154 controlled by a logic unit 152 which is activated by an address decode circuit 150. This is shown in greater detail in FIGS. 7 and 9. As can be seen in FIG. 6, the switch means 154A . . . 154I are interposed between the image memory module 138A and the image memory module 138B. That is, the switch means 154A . . . 154I divide the data buses 42A . . . 42I into two sections: the first section comprising the video processor module 134 and the image memory module 138A; the second part comprising the morphological processor 140 and the second image memory module 138B. The switch means 154 provide the capability of either connecting one part of the data bus 42A to the other part or leaving the data bus open, i.e. the data bus severed.
Referring to FIGS. 8a-8c, there are shown various configurations of the possible data bus structure that result from using the switch means 154A-154I.
FIG. 8a shows nine data buses 42A-42I, wherein the switch means 154A, 154B and 154C connect the data buses 42A, 42B and 42C into one continuous data bus. However, the switch means 154D . . . 154I are left in the open position, thereby severing the data buses 42D . . . 42I into two portions. In this mode of operation, parallel processing can occur simultaneously using the data buses 42D . . . 42I by the modules 134 and 138A and by the modules 138B and 140. In addition, serial or pipeline processing can occur along the data buses 42A . . . 42C. As before, with the switch means 154A . . . 154I dynamically selectable, total parallel processing as shown in FIG. 8b or total pipeline processing as shown in FIG. 8c are also possible. In addition, of course, other configurations, including but not limited to the macro interleave configuration of FIG. 5c, are also possible.
Referring to FIG. 7, there is shown a schematic block diagram of the electronic circuits used to control the data buses 42A . . . 42I of the video image processor 110. As previously stated, a switch means 154 is interposed between two halves of each data bus 42. Shown in FIG. 7 is the switch means 154A interposed in the data bus 42A and the switch means 154I interposed in the data bus 42I. Each one of the switch means 154 is controlled by the logic unit 152 which is activated by the address decode circuit 150. Similar to the address decode circuit 50, the address decode circuit 150 is connected to the eight address lines of the control bus 132. If the correct address is detected, the control signal 156 is sent to the logic unit 152. The control signal 156 activates the logic unit 152 which in turn activates one or more of the switch means 154.
Referring to FIG. 9, there is shown a detailed simplistic schematic circuit diagram of the logic unit 152 and the switch means 154A. As can be seen, the logic unit 152 is identical to the logic unit 52. The switch means 154 (a tri-state transceiver) interconnects one half of one of the bus lines to the other half of the bus line 42. In all other respects, the operation of the switch means 154, logic unit 152, and the address decode circuit 150 is identical to that shown and described for the address decode circuit 50, logic unit 52, and switch means 54.
As previously stated, the reconfigurable data buses 42 interconnect the plurality of modules (34, 36, 38 and 40) to one another. The modules comprise a plurality of processor modules and a plurality of memory modules. With the exception of the communication means, logic unit and address decode circuit, the rest of the electronic circuits of each module for processing or storing data can be of conventional design. One of the processor modules 34 is the video processor module.
The video processor module 34 is shown in block diagram form in FIG. 10. The video processor 34 receives three analog video signals from the color camera 12. The three analog video signals, comprising signals representative of the red, green, and blue images, are processed by a DC restoration analog circuit 60. Each of the resultant signals is then digitized by a digitizer 62. Each of the three digitized video signals is the analog video signal from the color camera 12, segmented to form a plurality of image pixels and with each image pixel digitized to form a greyscale value of 8 bits. The digitized video signals are supplied to a 6×6 cross-point matrix switch 64 which outputs the three digitized video signals onto three of the six data buses (42A-42F).
From the data buses 42A-42F, the digitized video signals can be stored in one or more of the image memory modules 38A-38C. The selection of a particular image memory module 38A-38C to store the digitized video signals is accomplished by the address decode circuit 50 connected to the logic unit 52 which activates the particular tri-state transceivers 54, all as previously described. The selection of which data bus 42 the digitized video images would be sent to is based upon registers in the logic unit 52 which are set by the control bus 32.
A block diagram of an image memory module 38 is shown in FIG. 11. Each of the memory modules 38 contains three megabytes of memory. The three megabytes of memory is further divided into three memory planes: an upper plane, a middle plane, and a lower plane. Each plane of memory comprises 512×2048 bytes of memory. Thus, there is approximately one megabyte of memory per memory plane.
Since each digitized video image is stored in a memory space of 256×256 bytes, each memory plane has room for 16 video images. In total, a memory module has room for the storage of 48 video images. The address of the selection of the particular video image from the particular memory plane within each memory module is supplied along the control bus 32. As the data is supplied to or received from each memory module 38 via the data buses 42, it is supplied to or from the locations specified by the address set on the control bus 32. The three digitized video images from the video processor 34 are stored, in general, in the same address location within each one of the memory planes of each memory module.
Thus, the digital video signal representative of the red video image may be stored in the starting address location of x=256, y=0 of the upper memory plane; the digitized signal representative of the blue video image may be stored in x=256, y=0 of the middle memory plane; and the digital video signal representative of the green video image may be stored in x=256, y=0 of the lower memory plane.
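The memory geometry described above can be checked with a few lines of arithmetic (a Python sketch; the constants are taken from the text, the variable names are illustrative only):

# Memory geometry of one image memory module 38, using the figures given above.
PLANE_WIDTH, PLANE_HEIGHT = 2048, 512          # each memory plane is 512 x 2048 bytes
IMAGE_SIDE = 256                               # each digitized image occupies 256 x 256 bytes

bytes_per_plane = PLANE_WIDTH * PLANE_HEIGHT                                   # 1,048,576 (about one megabyte)
images_per_plane = (PLANE_WIDTH // IMAGE_SIDE) * (PLANE_HEIGHT // IMAGE_SIDE)  # 8 * 2 = 16 images
images_per_module = 3 * images_per_plane                                       # 3 planes -> 48 images

# The red, blue and green images of one field of view are stored at the same
# (x, y) offset, e.g. x=256, y=0, in the upper, middle and lower planes respectively.
print(bytes_per_plane, images_per_plane, images_per_module)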
Once the digital video signals representative of the digitized video images are stored in the memory planes of one or more memory modules 38, the digitized video images are operated upon by the morphological processor 40.
The morphological processor 40 receives data from the data buses 42A-42D and outputs data to the data buses 42E-42G. Further, the morphological processor 40 can receive input or output data to and from the data buses 42H and 42I. Referring to FIG. 12, there is shown a schematic block diagram of the morphological processor 40. The morphological processor 40 receives data from data buses 42A and 42B which are supplied to a multiplexer/logarithmic unit 70. The output of the multiplexer/logarithmic unit 70 (16 bits) is either data from the data buses 42A and 42B or is the logarithm thereof. The output of the multiplexer/logarithmic unit 70 is supplied as the input to the ALU 72, on the input port 71. The ALU 72 has two input ports: 71 and 75.
The morphological processor 40 also comprises a multiplier accumulator 74. The multiplier accumulator 74 receives data from the data buses 42C and 42D and from the data buses 42H and 42I respectively, and performs the operation of multiply and accumulate thereon. The multiplier accumulator 74 can perform the functions of 1) multiplying the data from (data bus 42C or data bus 42D) by the data from (data bus 42H or data bus 42I); or 2) multiplying the data from (data bus 42C or data bus 42D) by a constant as supplied from the master controller. The result of that calculation is outputted onto the data buses 42I, 42H and 42G. The result of the multiply accumulate unit 74 is that it calculates a Green's function kernel in realtime. The Green's function kernel is a summation of all the pixel values from the start of the horizontal sync to the then current pixel. This would be used subsequently in calculation of other properties of the image.
A portion of the result of the multiplier accumulator 74 (16 bits) is also inputted into the ALU 72, on the input port designated as a. The multiplier accumulator 74 can perform calculations of multiply and accumulate that are 32 bits in precision. The result of the multiplier accumulator 74 can be switched by the multiplier accumulator 74 to be the most significant 16 bits or the least significant 16 bits, and is supplied to the a input of the ALU 72.
The output of the ALU 72 is supplied to a barrel shifter 76 which is then supplied to a look-up table 78 and is placed back on the data buses 42E and 42F. The output of the ALU 72 is also supplied to a prime generator 80 which can also be placed back onto the data buses 42E and 42F. The function of the prime generator 80 is to determine the boundary pixels, as described in U.S. Pat. No. 4,538,299.
The ALU 72 can also perform the function of subtracting data on the input port a from data on the input port b. The result of the subtraction is an overflow or underflow condition, which determines a>b or a<b. Thus, the pixel-by-pixel maximum and minimum for two images can be calculated.
Finally, the ALU 72 can perform histogram calculations. There are two types of histogram calculation. In the first type, the value of a pixel (a pixel value is 8 bits or is between 0-255) selects the address of the memory 73. The memory location at the selected address is incremented by 1. In the second type, two pixel values are provided: a first pixel value of the current pixel location and a second pixel value at the pixel location of a previous line to the immediate left or to the immediate right (i.e. diagonal neighbor). The pairs of pixel values are used to address a 64K memory (256×256) and the memory location of the selected pixel is incremented. Thus, this histogram is texture related.
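The two histogram types can be sketched as follows (a Python/NumPy sketch of the idea, not the hardware implementation; the function names are illustrative, the image is assumed to be a 2-D array of 8-bit greyscale values, and the upper-left diagonal neighbour is chosen arbitrarily where the text allows either diagonal):

import numpy as np

def intensity_histogram(image: np.ndarray) -> np.ndarray:
    # First histogram type: each 8-bit pixel value (0-255) addresses a 256-entry
    # memory and the addressed entry is incremented by 1.
    hist = np.zeros(256, dtype=np.int64)
    for value in image.ravel():
        hist[int(value)] += 1
    return hist

def texture_histogram(image: np.ndarray) -> np.ndarray:
    # Second histogram type: the pair (diagonal neighbour on the previous line,
    # current pixel) addresses a 256 x 256 (64K) memory whose entry is incremented.
    hist = np.zeros((256, 256), dtype=np.int64)
    rows, cols = image.shape
    for y in range(1, rows):
        for x in range(1, cols):
            hist[int(image[y - 1, x - 1]), int(image[y, x])] += 1
    return hist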
In summary, the morphological processor 40 can perform the functions of addition, multiplication, multiplication with a constant, summation of a line, finding the pixel-by-pixel minimum and maximum for two images, prime generation, and also histogram calculation. The results of the morphological processor 40 are sent along the data buses 42 and stored in the image memory modules 38. The ALU 72 can be a standard 181 type, e.g. Texas Instruments part #ALS181. The multiplier accumulator 74 can be of conventional design, such as the Weitek WTL2245.
Referring to FIG. 13, there is shown the graphic controller processor 36 in schematic block diagram form. The function of the graphic controller 36 is to receive processed digitized video images from the memory modules 38, graphic data, and alphanumeric data and combine them for output. The data from the control bus 32 is supplied to an Advanced CRT controller 84. The CRT controller is a part made by Hitachi, part number HD63484. The output of the Advanced CRT controller 84 controls a frame buffer 80. Stored within the frame buffer 80 are the graphics and alphanumeric data. The video images from the data buses 42A-42F are also supplied to the graphic controller processor 36. One of the data buses 42 is selected and that, combined with the output of the frame buffer 80, is supplied to a look-up table 82. The output of the look-up table 82 is then supplied as the output to one of the data buses 42G, 42H or 42I. The function of the graphic controller processor 36 is to overlay video, alpha and graphics information; the result is then supplied through a D-to-A converter 86 to the monitor 26. In addition, the digital overlayed image can also be stored in one of the image memory modules 38.
The image which is received by the graphic controller processor 36 from one of the image memory modules 38 comes through one of the data buses 42A-42F. The control signals along the control bus 32 specify to the image memory module 38 the starting address and the x and y offset with regard to vertical sync as to when the data from the image memory within that memory module 38 is to be outputted onto the data buses 42A-42F. Thus, split screen images can be displayed on the display monitor 26.
Referring to FIG. 14, there is shown a schematic block diagram of the master controller 30. The master controller 30, as previously stated, communicates with the host computer 22 via a Q-bus. The master controller 30 receives address and data information from the host computer 22 and produces a 64 bit microcode. The 64 bit microcode can be from the writable control store (WCS) location of the host computer 22 and is stored in WCS 90, or it can be from the proxy prom 92. The control program within the proxy prom 92 is used upon power up, as WCS 90 contains volatile RAM. The 64 bit microcode is processed by the 29116 ALU 94 of the master controller 30. The master controller 30 is of the Harvard architecture in that separate memory exists for instruction as well as for data. Thus, the processor 94 can get instruction and data simultaneously. In addition, the master controller 30 comprises a background sequencer 96 and a foreground sequencer 98 to sequence series of program instructions stored in the writable control store 90 or the proxy prom 92. The Q-bus memory map from which the master controller 30 receives its writable control store and its program memory is shown below:
ADDRESS (HEXADECIMAL)      Use
3FFFFF - 3FE000            BS7 (Block 7, conventional Digital Equipment Corp. nomenclature)
3FDFFF - 3FA000            Scratch Pad
387FFF - 380000            Writable Control Store
37FFFF - 280000            Image Memory Window
1FFFFF - 0                 Host Computer Program Memory
In addition, the control signals ADAV, CMD and WRT have the following uses.
CONTROL SIGNALS
ADAV   CMD   WRT    Use
0      X     X      Quiescent Bus
1      1     0      Read Register
1      1     1      Write Register
1      0     0      Read Image Memory
1      0     1      Write Image Memory
The master controller 30 operates synchronously with each one of the modules 34, 36, 38 and 40 and asynchronously with the host computer 22. The clock signal is generated by the master controller 30 and is sent to every one of the modules 34, 36, 38 and 40. In addition, the master controller 30 starts the operation of the entire sequence of video image processing and video image storing upon the start of vertical sync. Thus, one of the signals to each of the logic units 52 is a vertical sync signal. In addition, horizontal sync signals may be supplied to each one of the logic units.
The logic units may also contain logic memory elements that switch their respective tri-state transceivers at prescribed times with respect to the horizontal sync and the vertical sync signals. Referring to FIG. 15, there is shown a schematic diagram of another embodiment of a logic unit 252. The logic unit 252 is connected to a first address decode circuit 250 and a second address decode circuit 251. The logic unit 252 comprises a first AND gate 254, a second AND gate 256, a counter 258 and a vertical sync register 260.
Prior to the operation of the logic unit 252, the first address decode circuit 250 is activated, loading the data from the data lines of the control bus 32 into the counter 258.
Thereafter, when the second address decode circuit 251 is activated and a vertical sync signal is received, the counter 258 counts down on each clock pulse received. When the counter 258 reaches zero, the tri-state registers 64A and 64B are activated.
It should be emphasized that the master controller 30, each one of the processing modules 34, 36, 38 and 40 and each one of the image memory modules 38 can be of conventional design. The master controller 30 controls the operation of each one of the modules along a separate control bus 32. Further, each of the modules communicates with one another by a plurality of data buses 42. The interconnection of each one of the modules (34-40) with one or more of the data buses 42 is accomplished by means within the module (34-40) which is controlled by the control signals along the control bus 32. The interconnection of the data buses 42 to the electronic function within each of the modules is as previously described. However, the electronic function within each of the modules, such as memory storage or processing, can be of conventional architecture and design.
In the apparatus 8 of the present invention, an image of the field of view as seen through the microscope 16 is captured by the color camera 12. The color camera 12 converts the image in the field of view into an electrical image of the field of view. In reality, three electrical images are converted. The electrical images from the color camera 12 are processed by the image processor 10 to form a plurality of different representations of the electrical image. Each different representation is a representation of a different parameter of the field of view. One representation is the area of interest. Another representation is the integrated optical density.
Referring to FIG. 16 there is shown an example of the digitized electrical signal representative of the electrical image of the field of view. The digitized image shown in FIG. 16 is the result of the output of the video processor module 34, which segments and digitizes the analog signal from the color camera 12. (For the purpose of this discussion, only one electrical image of the field of view will be discussed. However, it is readily understood that there are three video images, one for each color component of the field of view.) As shown in FIG. 16, each pixel point has a certain amplitude representing the greyscale value. The object in the field of view is located within the area identified by the line 200. Line 200 encloses the object in the field of view.
As previously stated, the image processor 10 and more particularly the morphological processor module 40 processes the digitized video image to form a plurality of different processed digitized video images, with each different processed digitized video image being a different representation of the digitized video image.
One representation of the electrical image shown in FIG. 16 is shown in FIG. 17. This is the representation which represents a Green's function kernel for the area of the image in the field of view. In this representation, a number is assigned to each pixel location, with the numbers being sequential starting from left to right. While FIG. 17 shows the pixel at the location X=0, Y=0 (as shown in FIG. 16) as being replaced by the number 1 and the numbers being sequential therefrom, any other number can also be used. In addition, the number assigned to the beginning pixel in each line can be any number--so long as each successive pixel, in the same line, differs from the preceding pixel by the number 1.
Another representation of the electrical image of the example shown in FIG. 16 is a Green's function kernel for the integrated optical density of the image in the field of view, as shown in FIG. 18. In this representation, each pixel location (Xm,Yn) is assigned a number which is calculated as follows:
P(X0,Yn) + P(X1,Yn) + . . . + P(Xm,Yn)
i.e. the sum of the greyscale values P(Xi,Yn) along the line Yn from the start of the line up to and including the pixel at Xm. As previously discussed, the morphological processor 40 is capable of calculating a Green's function kernel for the integrated optical density "on the fly" or in "realtime".
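The two representations of FIGS. 17 and 18 can be sketched in software as follows (a Python/NumPy sketch under the assumption that the image is a 2-D array of greyscale values; the function names and the choice of 1 as the starting number are illustrative only):

import numpy as np

def area_kernel(image: np.ndarray, start: int = 1) -> np.ndarray:
    # FIG. 17 style representation: pixels are numbered sequentially from left to
    # right, each successive pixel on a line differing from the preceding one by 1.
    rows, cols = image.shape
    kernel = np.zeros((rows, cols), dtype=np.int64)
    counter = start
    for y in range(rows):
        for x in range(cols):
            kernel[y, x] = counter
            counter += 1
    return kernel

def iod_kernel(image: np.ndarray) -> np.ndarray:
    # FIG. 18 style representation: each pixel holds the sum of the greyscale
    # values from the start of its line up to and including itself.
    return np.cumsum(image.astype(np.int64), axis=1)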
The video image processor 10 also receives the electrical image from the color camera 12 and generates positional information that represents the boundary of the object contained in the field of view. One method of calculating the positional information that represents the boundary of the object in the field of view is disclosed in U.S. Pat. No. 4,538,299, which is incorporated by reference. As disclosed in the '299 patent, the digitized greyscale value (e.g. the image in FIG. 16) is compared to a pre-set threshold value such that, as a result of the comparison, if the greyscale value at the pixel location of interest exceeds the pre-set threshold value, then the value "1" is assigned to that pixel location. If the greyscale value of the pixel of interest is below the pre-set threshold value, then a "0" is assigned at that location. As a result, the digital video image is converted to a representation where a value of "1" is assigned where there is an object and a value of "0" is assigned at locations which are outside the boundary of the object. An example of the conversion of the image shown in FIG. 16 by this method is the representation shown in FIG. 19.
Thereafter, and in accordance with the '299 patent, the representation, as shown in FIG. 19, is converted to a third representation by assigning a value to a pixel with a location (X,Y) in accordance with
P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h
where a,b,c,d,e,f,g,h are the values of the eight nearest neighbors surrounding pixel (X,Y) in accordance with
g        d            h
c        pixel (X,Y)  a
f        b            e
This can be done by the prime generator 80 portion of the morphological processor 40.
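The thresholding and the neighbour-value encoding described above can be sketched as follows (a Python/NumPy sketch; the threshold value, the treatment of border pixels and the function names are assumptions, and the suppression of interior pixels follows the rules recited in claim 22 below):

import numpy as np

def threshold_image(grey: np.ndarray, threshold: int) -> np.ndarray:
    # Second representation (FIG. 19): "1" where the greyscale value exceeds the
    # pre-set threshold (object), "0" everywhere outside the object.
    return (grey > threshold).astype(np.uint8)

# Neighbour layout around pixel (X, Y):   g   d   h
#                                         c  X,Y  a
#                                         f   b   e
def neighbour_code(binary: np.ndarray) -> np.ndarray:
    # Third representation: P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h,
    # computed for interior pixels (border pixels are simply left at 0 in this sketch).
    rows, cols = binary.shape
    p = np.zeros((rows, cols), dtype=np.int32)
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            if binary[y, x] == 0:
                continue                              # background pixels stay 0
            a = int(binary[y, x + 1]); b = int(binary[y + 1, x])
            c = int(binary[y, x - 1]); d = int(binary[y - 1, x])
            e = int(binary[y + 1, x + 1]); f = int(binary[y + 1, x - 1])
            g = int(binary[y - 1, x - 1]); h = int(binary[y - 1, x + 1])
            value = a*128 + b*64 + c*32 + d*16 + e*8 + f*4 + g*2 + h
            if a and b and c and d:                   # all four nearest neighbours set:
                value = 0                             # interior pixel, not on the boundary
            p[y, x] = value                           # value is also 0 when all eight neighbours are 0
    return p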
Finally, in accordance with the '299 patent, this third representation is scanned until a first non-zero P(X,Y) value is reached. The P(X,Y) value is compared, along with an input direction value, to a look-up table to determine the next location of the non-zero value of P(X,Y), forming a chaining code. In accordance with the teaching of the '299 patent, positional information showing the location of the next pixel which is on the boundary of the object in the field of view is then generated. This positional information takes the form of Delta X=+1, 0, or -1 and Delta Y=+1, 0, or -1.
This generated positional information is also supplied to trace the locations in each of the other different representations.
For example, if the first value of the boundary scanned out is X=4, Y=1 (as shown in FIG. 19), that positional information is supplied to mark the locations in the representations shown in FIGS. 17 and 18, thereby marking the start of the boundary of the object in those representations. Thus, in FIG. 17, the pixel location having the value 13 is initially chosen. In FIG. 18, the pixel location having the value 44 is initially chosen.
In accordance with the teaching of the '299 patent, the next positional information generated, which denotes the next pixel location that is on the boundary of the object in the field of view, would be Delta X=+1, Delta Y=+1. This would bring the trace to the location X=5, Y=2. That positional information is also supplied to the representation for the area, shown in FIG. 17, and to the representation denoting the integrated optical density, shown in FIG. 18. The trace caused by the positional information would move to the pixel location X=5, Y=2 in FIG. 17, where the pixel has the value 22. Similarly, in FIG. 18, the trace would move to the pixel location X=5, Y=2, where the pixel has the value 76. As the boundary of the object is traced, the same positional information is supplied to the other representations denoting other parameters of the images of the field of view--which inherently do not have information on the boundary of the object in the field of view.
It should be emphasized that although the method and the apparatus heretofore described obtain the positional information in accordance with the teaching disclosed in the '299 patent, the present invention is not necessarily limited to positional information based upon the '299 patent teaching. In fact, any source of positional information can be used with the method and apparatus of the present invention, so long as that information denotes the position of the boundary of the object in the field of view.
As the boundary of the object in the field of view is traced out in each of the different representations that represent the different parameters of the object in the field of view, the different parameters are calculated. Further, the calculation of certain parameters may not depend on the boundary of the object. They may, instead, depend upon positions inside the boundary of the object. Thus, once the boundary of the object in the field of view is determined, either positions on the boundary or positions interior to the boundary are traced out in identical corresponding positions in each of the different representations.
For example, to calculate the area of the object in the field of view, one takes the positional information and determines the value of the pixel at that location. Thus, the first pixel traced would have the value 13. Except for the first pixel, the location of the current pixel (Xm,Yn) is compared to the location of the previously traced pixel (Xj,Yk) such that if Yn is less than Yk then the present value at Pi (Xm,Yn) is added to the value A. If Yn is greater than Yk then the present value at Pi (Xm -1,Yn) is added to B. B is subtracted from A to derive the area of the object in the field of view. The calculation is shown in FIG. 20.
Similarly, for the calculation of the integrated optical density, if the present pixel location (Xm,Yn) compared to the previously traced pixel location (Xj,Yk) is such that Yn is less than Yk, then Pi (Xm,Yn) is added to A. If Yn is greater than Yk then Pi (Xm -1,Yn) is added to B. B is subtracted from A to derive the integrated optical density of the object. This is shown in FIG. 21.
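Both accumulations can be sketched with a single routine (a Python sketch; boundary is assumed to be the ordered list of (X, Y) boundary locations produced by the trace, and the kernels are the FIG. 17 and FIG. 18 representations from the earlier sketch; the names are illustrative, and the first traced pixel simply initializes the comparison as described above):

def accumulate_parameter(kernel, boundary):
    # Trace the boundary locations through one Green's function kernel and return
    # A - B: moving up a line adds the kernel value at the current pixel to A,
    # moving down a line adds the value of the pixel to its left to B.
    a_sum, b_sum = 0, 0
    prev_y = None
    for (x, y) in boundary:
        if prev_y is not None:
            if y < prev_y:                  # Yn < Yk: add Pi(Xm, Yn) to A
                a_sum += kernel[y][x]
            elif y > prev_y:                # Yn > Yk: add Pi(Xm - 1, Yn) to B
                b_sum += kernel[y][x - 1]
        prev_y = y
    return a_sum - b_sum

# area = accumulate_parameter(area_kernel(image), boundary)   # corresponds to FIG. 20
# iod  = accumulate_parameter(iod_kernel(image), boundary)    # corresponds to FIG. 21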
There are many advantages to the method and apparatus of the present invention. First and foremost is that, as the positional information regarding the boundary of an object in the field of view is provided, multiple parameters of that object can be calculated based upon different representations of the image of the field of view containing the object--all of which representations do not inherently contain any positional information regarding the location of the boundary of the object in the field of view. Further, with the video image processor described, such different parameters can be calculated simultaneously, thereby greatly increasing image processing throughput.

Claims (22)

What is claimed is:
1. A method for simultaneously generating a plurality of parameters of an object in a field of view, said method comprising the steps of:
(a) forming an electrical image of said field of view;
(b) processing said electrical image to form a plurality of different representations of said electrical image; wherein each different representation is a Green's function representation of a different parameter of said same field of view;
(c) generating positional information that represents the boundary of said object;
(d) simultaneously tracing identical corresponding locations in each of said different Green's function representations of said same field of view in response to the positional information generated; and
(e) calculating the different parameters from each of said different Green's function representations based upon locations traced in each of said different Green's function representations.
2. The method of claim 1 wherein said locations traced in step (d) correspond to positions along the boundary of said object.
3. The method of claim 1 wherein said locations traced in step (d) correspond to positions interior to the boundary of said object.
4. The method of claim 1 where said step (b) further comprises the steps of:
(b)(1) segmenting said electrical image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
(b)(2) processing said electrical signals to form a plurality of different Green's function representations of said electrical image; wherein each different Green's function representation is a representation of a different parameter of said field of view.
5. The method of claim 4 wherein one of said plurality of parameters of said object is the area of said object.
6. The method of claim 5 wherein said step (b) (2) further comprises the steps of:
assigning a number to each pixel location with said numbers being sequential starting from left to right for the representation that is a representation of the area of said field of view.
7. The method of claim 6 wherein said step (e) further comprises the steps of:
(i) if present pixel location (Xm, Yn) compared to previously traced pixel location (Xj, Yk) is such that
Yn <Yk, then adding present pixel value Pi (Xm, Yn) to A;
Yn >Yk, then adding present pixel value Pi (Xm -1, Yi) to B
(ii) subtracting B from A to derive the area of said object.
8. The method of claim 4 wherein one of said plurality of parameters of said object is the integrated optical density of said object.
9. The method of claim 8 wherein said step (b)(2) further comprises the steps of:
assigning a number to each pixel location (Xm, Yn) with said number calculated as follows:
P(X0,Yn) + P(X1,Yn) + . . . + P(Xm,Yn)
where P(Xi,Yn) is the grey scale value at the pixel location of (Xi, Yn),
for the representation that is a representation of the integrated optical density of said field of view.
10. The method of claim 9 wherein said step (e) further comprises the steps of:
(i) if present pixel location (Xm,Yn) compared to the previously traced pixel location (Xj,Yk) is such that
Yn <Yk, then adding present pixel value Pi (Xm,Yn) to A
Yn >Yk, then adding present pixel value Pi (Xm -1, Yn) to B
(ii) subtracting B from A to derive the integrated optical density of said object.
11. The method of claim 1 wherein said step (c) further comprises the steps of:
(c)(1) segmenting said electrical signal into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value to form a first representation of said image;
(c)(2) processing the electrical signal of each of said greyscale value of said first representation to form a second representation of said image by comparing the greyscale value of each pixel to a pre-set threshold value such that as a result a "0" is assigned at each pixel location which is outside the boundary of said object and a "1" is assigned everywhere else;
(c)(3) converting said second representation into a third representation by assigning a value to a pixel (x,y) in accordance with
P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h
where a,b,c,d,e,f,g,h are the values of the eight nearest neighbors surrounding pixel (X,Y) in accordance with
g        d            h
c        pixel (X,Y)  a
f        b            e
12. The method of claim 11 wherein said step (d) further comprises the steps of:
scanning said third representation until a first non-zero P(X,Y) value is reached;
comparing said P(X,Y) value and an input direction value to a look-up table to determine the next location of the non-zero value of P(X,Y) and forming a chaining code.
13. An apparatus for generating a plurality of parameters of an object in a field of view, said apparatus comprising:
imaging means for forming an electrical image of said field of view;
means for processing said electrical image to form a plurality of different representations of said electrical image; wherein each different representation is a Green's function representation of a different parameter of said same field of view;
means for generating positional information that represent the boundary of said object;
means for simultaneously tracing identical corresponding locations in each of said different Green's function representations of said same field of view in response to the positional information generated; and
means for calculating the different parameters from each of said different Green's function representations based upon locations traced in each of said different Green's function representations.
14. The apparatus of claim 13 wherein said processing means further comprises:
means for segmenting said electrical image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
means for processing said electrical signals to form a plurality of different representations of said electrical image; wherein each different representation is a Green's function representation of a different parameter of said field of view.
15. The apparatus of claim 14 wherein one of said plurality of parameters of said object is the area of said object.
16. The apparatus of claim 15 wherein said processing means further comprises:
means for assigning a number to each pixel location with said numbers being sequential starting from left to right for the representation that is a representation of the area of said field of view.
17. The apparatus of claim 16 wherein said calculating means further comprises:
means for adding the value of the present pixel location Pi (Xm,Yn) to A if Yn <Yk and Pi (Xm -1, Yi) to B if Yn >Yk where Yk is the Y component of (Xj, Yk), the location of the immediately preceding pixel that was traced; and
means for subtracting B from A to derive the area of said object.
18. The apparatus of claim 14 wherein one of said plurality of parameters of said object is the integrated optical density of said object.
19. The apparatus of claim 18 wherein said processing means further comprises:
means for assigning a number to each pixel location (Xm,Yn) with said number calculated as follows:
P(X0,Yn) + P(X1,Yn) + . . . + P(Xm,Yn)
where P(Xi,Yn) is the greyscale value at the pixel location of (Xi, Yn).
20. The apparatus of claim 19 wherein said calculating means further comprises:
means for adding the value of the present pixel location Pi (Xm,Yn) to A if Yn <Yk and Pi (Xm -1, Yn) to B if Yn >Yk where Yk is the Y component of (Xj,Yk), the location of the immediately preceding pixel that was traced; and
means for subtracting B from A to derive the integrated optical density of said object.
21. The apparatus of claim 13 wherein said generating means further comprises:
means for forming a first representation of said image by segmenting said image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
means for processing the electrical signal of each of said greyscale value to form a second representation of said image;
logic means for converting said second representation into a third representation whereby the value of a pixel at a location (hereinafter: pixel (X,Y)) in the second representation and the values of the nearest adjacent neighbors of said pixel at said location are converted into a single value at said corresponding location (hereinafter: P(X,Y)) in said third representation;
storage means for storing said third representation; and
table means for storing various possible values of P(X,Y), said table means for receiving a value of P(X,Y) and an input direction value, and for producing an output direction value to indicate the next location of P(X,Y) having a non-zero value; said non-zero values of P(X,Y) form the boundary of said object.
22. The apparatus of claim 21 wherein said logic means is adapted to convert said second representation in accordance with the following rules:
(1) If pixel (X,Y)=0, then assign 0 to P(X,Y);
(2) If pixel (X,Y)=1 and all eight nearest neighbors of pixel (X,Y)=0, then assign 0 to P(X,Y);
(3) If pixel (X,Y)=1 and all four nearest neighbors of pixel (X,Y)=1, then assign 0 to P(X,Y);
(4) Otherwise assign a non-zero value to P(X,Y) wherein said value assigned to P(X,Y) is a number composed of the values of the eight nearest neighbors of pixel (X,Y).
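Claims 21-22 convert the binary second representation into a third representation in which only boundary pixels carry a non-zero code assembled from their eight neighbors; isolated pixels and interior pixels (all four edge-adjacent neighbors set) are zeroed. Below is a minimal sketch of rules (1)-(4), assuming an 8-bit code with one bit per neighbor in a fixed clockwise order; the bit ordering is an illustrative choice, not specified by the claims.

```python
# Minimal sketch (Python): rules (1)-(4) of claim 22. The 8-bit neighbor code and
# its bit order (clockwise from the pixel above) are assumptions for illustration.

# Neighbor offsets: N, NE, E, SE, S, SW, W, NW (clockwise from directly above).
NEIGHBORS_8 = [(0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1)]
NEIGHBORS_4 = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # edge-adjacent neighbors only

def third_representation(binary):
    h, w = len(binary), len(binary[0])

    def px(x, y):                                   # out-of-frame pixels count as 0
        return binary[y][x] if 0 <= x < w and 0 <= y < h else 0

    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if px(x, y) == 0:                                       # rule (1)
                continue
            n8 = [px(x + dx, y + dy) for dx, dy in NEIGHBORS_8]
            if sum(n8) == 0:                                        # rule (2): isolated pixel
                continue
            if all(px(x + dx, y + dy) for dx, dy in NEIGHBORS_4):   # rule (3): interior pixel
                continue
            # rule (4): non-zero code composed of the eight neighbor values
            out[y][x] = sum(bit << i for i, bit in enumerate(n8))
    return out

binary = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
for row in third_representation(binary):
    print(row)      # only the eight perimeter pixels of the 3x3 block are non-zero
```

The table means of claim 21 would then map each such code, together with the incoming direction value, to the output direction pointing at the next non-zero P(X,Y); that lookup table is not reproduced in this sketch.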
US07/822,190 | 1987-08-14 | 1992-02-18 | Method and apparatus for generating a plurality of parameters of an object in a field of view | Expired - Lifetime | US5432865A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US07/822,190 | US5432865A (en) | 1987-08-14 | 1992-02-18 | Method and apparatus for generating a plurality of parameters of an object in a field of view

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US8598587A | 1987-08-14 | 1987-08-14
US07/350,400 | US5121436A (en) | 1987-08-14 | 1989-05-11 | Method and apparatus for generating a plurality of parameters of an object in a field of view
US07/822,190 | US5432865A (en) | 1987-08-14 | 1992-02-18 | Method and apparatus for generating a plurality of parameters of an object in a field of view

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US07/350,400 | Continuation | US5121436A (en) | 1987-08-14 | 1989-05-11 | Method and apparatus for generating a plurality of parameters of an object in a field of view

Publications (1)

Publication Number | Publication Date
US5432865A | true | US5432865A (en) | 1995-07-11

Family

ID=26773303

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
US07/350,400 | Expired - Lifetime | US5121436A (en) | 1987-08-14 | 1989-05-11 | Method and apparatus for generating a plurality of parameters of an object in a field of view
US07/822,190 | Expired - Lifetime | US5432865A (en) | 1987-08-14 | 1992-02-18 | Method and apparatus for generating a plurality of parameters of an object in a field of view

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
US07/350,400 | Expired - Lifetime | US5121436A (en) | 1987-08-14 | 1989-05-11 | Method and apparatus for generating a plurality of parameters of an object in a field of view

Country Status (1)

Country | Link
US (2) | US5121436A (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5556764A (en) * | 1993-02-17 | 1996-09-17 | Biometric Imaging, Inc. | Method and apparatus for cell counting and cell classification
US5754688A (en) * | 1993-06-04 | 1998-05-19 | Eli Lilly And Company | Method and apparatus for stereologic analysis of two-dimensional images
US5848177A (en) * | 1994-12-29 | 1998-12-08 | Board Of Trustees Operating Michigan State University | Method and system for detection of biological materials using fractal dimensions
US5699794A (en) * | 1995-12-19 | 1997-12-23 | Neopath, Inc. | Apparatus for automated urine sediment sample handling
WO1998055026A1 (en) * | 1997-06-05 | 1998-12-10 | Kairos Scientific Inc. | Calibration of fluorescence resonance energy transfer in microscopy
US6389149B1 (en) * | 1997-12-31 | 2002-05-14 | Intel Corporation | Method and apparatus to improve video processing in a computer system or the like
DE10053202A1 (en) * | 2000-10-26 | 2002-05-16 | Gsf Forschungszentrum Umwelt | Method for image acquisition of samples and optical viewing device for carrying out the method
US20040202357A1 (en) | 2003-04-11 | Perz Cynthia B. | Silhouette image acquisition
US9239281B2 (en) * | 2008-04-07 | 2016-01-19 | Hitachi High-Technologies Corporation | Method and device for dividing area of image of particle in urine


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4453266A (en) * | 1980-04-21 | 1984-06-05 | Rush-Presbyterian-St. Luke's Medical Center | Method and apparatus for measuring mean cell volume of red blood cells

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4060713A (en) * | 1971-06-23 | 1977-11-29 | The Perkin-Elmer Corporation | Analysis of images
US4097845A (en) * | 1976-11-01 | 1978-06-27 | Rush-Presbyterian-St. Luke's Medical Center | Method of and an apparatus for automatic classification of red blood cells
US4550437A (en) * | 1981-06-19 | 1985-10-29 | Hitachi, Ltd. | Apparatus for parallel processing of local image data
US4538299A (en) * | 1981-12-04 | 1985-08-27 | International Remote Imaging Systems, Inc. | Method and apparatus for locating the boundary of an object
US5086476A (en) * | 1985-11-04 | 1992-02-04 | Cell Analysis Systems, Inc. | Method and apparatus for determining a proliferation index of a cell sample
US5099521A (en) * | 1988-03-24 | 1992-03-24 | Toa Medical Electronics Co., Ltd. | Cell image processing method and apparatus therefor

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5936731A (en) * | 1991-02-22 | 1999-08-10 | Applied Spectral Imaging Ltd. | Method for simultaneous detection of multiple fluorophores for in situ hybridization and chromosome painting
US6236744B1 (en) * | 1994-04-15 | 2001-05-22 | Fuji Photo Film Co., Ltd. | Image forming apparatus using synthesized image and graphic data to display a portion of an image surrounded by a graphic pattern
US6256405B1 (en) * | 1994-04-15 | 2001-07-03 | Fuji Photo Film Co., Ltd. | Image forming apparatus
US5768412A (en) * | 1994-09-19 | 1998-06-16 | Hitachi, Ltd. | Region segmentation method for particle images and apparatus thereof
US5825936A (en) * | 1994-09-22 | 1998-10-20 | University Of South Florida | Image analyzing device using adaptive criteria
US6144758A (en) * | 1995-01-09 | 2000-11-07 | Fuji Photo Film Co., Ltd. | Biochemical image analyzing apparatus
WO1996027846A1 (en) * | 1995-03-03 | 1996-09-12 | Arch Development Corporation | Method and system for the detection of lesions in medical images
US6185320B1 (en) * | 1995-03-03 | 2001-02-06 | Arch Development Corporation | Method and system for detection of lesions in medical images
US5854834A (en) * | 1995-04-21 | 1998-12-29 | Mci Communications Corporation | Network information concentrator
US5732150A (en) * | 1995-09-19 | 1998-03-24 | Ihc Health Services, Inc. | Method and system for multiple wavelength microscopy image analysis
US8401263B2 (en) | 1997-03-27 | 2013-03-19 | Intrexon Corporation | Method and apparatus for selectively targeting specific cells within a cell population
US20110201075A1 (en) * | 1997-03-27 | 2011-08-18 | Cyntellect, Inc. | Optoinjection methods
WO1999005638A1 (en) * | 1997-07-25 | 1999-02-04 | International Regenrative Medicine, Inc. | A quantitative visual system for comparing parameters which characterize multiple complex entities
US6697506B1 (en) * | 1999-03-17 | 2004-02-24 | Siemens Corporate Research, Inc. | Mark-free computer-assisted diagnosis method and system for assisting diagnosis of abnormalities in digital medical images using diagnosis based image enhancement
US8218840B2 (en) | 2000-10-24 | 2012-07-10 | Intrexon Corporation | Method and device for selectively targeting cells within a three-dimensional specimen
US6804385B2 (en) | 2000-10-24 | 2004-10-12 | Oncosis | Method and device for selectively targeting cells within a three-dimensional specimen
US20050047640A1 (en) * | 2000-10-24 | 2005-03-03 | Oncosis Llc | Method and device for selectively targeting cells within a three-dimensional specimen
US7092557B2 (en) | 2000-10-24 | 2006-08-15 | Cyntellect, Inc. | Method and device for selectively targeting cells within a three-dimensional specimen
US6850633B2 (en) * | 2001-02-23 | 2005-02-01 | Beckman Coulter, Inc. | Devices and methods for reading and interpreting guaiac-based occult blood tests
US20020136436A1 (en) * | 2001-02-23 | 2002-09-26 | Schrier Wayne H. | Devices and methods for reading and interpreting guaiac-based occult blood tests
US6707604B2 (en) * | 2001-04-05 | 2004-03-16 | Nikon Corporation | Confocal microscope system and controller thereof
US7065236B2 (en) | 2001-09-19 | 2006-06-20 | Tripath Imaging, Inc. | Method for quantitative video-microscopy and associated system and computer software program product
WO2003025554A3 (en) * | 2001-09-19 | 2003-08-21 | Tripath Imaging Inc | Method quantitative video-microscopy and associated system and computer software program product
US20030091221A1 (en) * | 2001-09-19 | 2003-05-15 | Tripath Imaging, Inc. | Method for quantitative video-microscopy and associated system and computer software program product
US20100054574A1 (en) * | 2002-01-24 | 2010-03-04 | Tripath Imaging, Inc. | Method for quantitative video-microscopy and associated system and computer software program product
US7826650B2 (en) | 2002-01-24 | 2010-11-02 | Tripath Imaging, Inc. | Method for quantitative video-microscopy and associated system and computer software program product
US8712118B2 (en) | 2003-04-10 | 2014-04-29 | Carl Zeiss Microimaging GmbH | Automated measurement of concentration and/or amount in a biological sample
US20100007727A1 (en) * | 2003-04-10 | 2010-01-14 | Torre-Bueno Jose De La | Automated measurement of concentration and/or amount in a biological sample
US20070269875A1 (en) * | 2003-10-31 | 2007-11-22 | Koller Manfred R | Method and apparatus for cell permeabilization
US7622274B2 (en) | 2004-03-15 | 2009-11-24 | Cyntellect, Inc. | Method for determining a product secretion profile of cells
US20050202558A1 (en) * | 2004-03-15 | 2005-09-15 | Koller Manfred R. | Methods for purification of cells based on product secretion
US7425426B2 (en) | 2004-03-15 | 2008-09-16 | Cyntellect, Inc. | Methods for purification of cells based on product secretion
US8236521B2 (en) | 2004-03-15 | 2012-08-07 | Intrexon Corporation | Methods for isolating cells based on product secretion
US20080014606A1 (en) * | 2004-03-15 | 2008-01-17 | Cyntellect, Inc. | Methods for purification of cells based on product secretion
US20100021077A1 (en) * | 2005-09-13 | 2010-01-28 | Roscoe Atkinson | Image quality
US8817040B2 (en) | 2005-09-13 | 2014-08-26 | Carl Zeiss Microscopy GmbH | Methods for enhancing image quality
US20090185267A1 (en) * | 2005-09-22 | 2009-07-23 | Nikon Corporation | Microscope and virtual slide forming system
US20080046059A1 (en) * | 2006-08-04 | 2008-02-21 | Zarembo Paul E | Lead including a heat fused or formed lead body
US20100179310A1 (en) * | 2009-01-09 | 2010-07-15 | Cyntellect, Inc. | Genetic analysis of cells
US8788213B2 (en) | 2009-01-12 | 2014-07-22 | Intrexon Corporation | Laser mediated sectioning and transfer of cell colonies

Also Published As

Publication number | Publication date
US5121436A (en) | 1992-06-09

Similar Documents

Publication | Publication Date | Title
US5432865A (en) | Method and apparatus for generating a plurality of parameters of an object in a field of view
US4229797A (en) | Method and system for whole picture image processing
US5200818A (en) | Video imaging system with interactive windowing capability
EP0118053B1 (en) | Image signal processor
US4924522A (en) | Method and apparatus for displaying a high resolution image on a low resolution CRT
EP0199573A2 (en) | Electronic mosaic imaging process
US4731864A (en) | Photographic camera simulation systems working from computer memory
EP0149516A2 (en) | Realtime digital diagnostic image processing system
EP0150910A2 (en) | Digital image frame processor
GB2130857A (en) | Graphics display system with viewports of arbitrary location and content
EP0332706A1 (en) | High speed image fault detecting method and apparatus
US4641356A (en) | Apparatus and method for implementing dilation and erosion transformations in grayscale image processing
EP1306810A1 (en) | Triangle identification buffer
EP0306305A2 (en) | Image processor with free flow pipeline bus
EP0294954B1 (en) | Image processing method
EP0069542B1 (en) | Data processing arrangement
US20030152267A1 (en) | Automatic perception method and device
CN120235837A (en) | A circuit board defect detection method and device based on improved YOLOv8
GB2208728A (en) | Digital processing system with multi data buses
CA1328019C (en) | Method and apparatus for generating a plurality of parameters of an object in a field of view
US4760607A (en) | Apparatus and method for implementing transformations in grayscale image processing
JPS6261175A (en) | Apparatus for analyzing connecting property of pixel
JPS61223894A (en) | Gradation conversion control method
Preston Jr | Cellular Logic Architectures
Fletcher et al. | Vidibus: a low-cost, modular bus system for real-time video processing

Legal Events

Date | Code | Title | Description
FEPP | Fee payment procedure

Free format text:ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF | Information on status: patent grant

Free format text:PATENTED CASE

AS | Assignment

Owner name:INTERNATIONAL REMOTE IMAGING SYSTEMS, INC., CALIFO

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL REMOTE IMAGING SYSTEMS, INC., A CALIFORNIA CORPORATION;REEL/FRAME:008085/0522

Effective date:19960813

AS | Assignment

Owner name:CITY NATIONAL BANK, CALIFORNIA

Free format text:SECURITY INTEREST;ASSIGNOR:INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.;REEL/FRAME:008376/0858

Effective date:19970103

AS | Assignment

Owner name:FOOTHILL CAPITAL CORPORATION, CALIFORNIA

Free format text:SECURITY AGREEMENT;ASSIGNOR:INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.;REEL/FRAME:009257/0255

Effective date:19980505

AS | Assignment

Owner name:CITY NATIONAL BANK, CALIFORNIA

Free format text:RELEASE OF SECURITY AGREEMENT;ASSIGNOR:INTERNATIONAL REMOTE IMAGING SYSTEMS, INC.;REEL/FRAME:009214/0371

Effective date:19970103

FEPP | Fee payment procedure

Free format text:PAT HLDR NO LONGER CLAIMS SMALL ENT STAT AS SMALL BUSINESS (ORIGINAL EVENT CODE: LSM2); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REMI | Maintenance fee reminder mailed
FPAY | Fee payment

Year of fee payment:4

SULP | Surcharge for late payment
FPAY | Fee payment

Year of fee payment:8

SULP | Surcharge for late payment

Year of fee payment:7

FPAY | Fee payment

Year of fee payment:12

