Overall task is to detect and count cells of various types in an image
{Note: many of the demos used herein rely on the NIH ImageJ program (a Java version of NIH Image that runs on virtually any computer system). The program and its source code can be downloaded for free from the NIH ImageJ web site.}
[Images: UofR Campus Library photo | Axial CT image of a lung | Sagittal MRI image of the same lung]
[Images: UofR Campus Library photo with motion blurring | the same image after blurring correction (top half)]
[Images: Radiation dose distribution around a tumor | Dose profile across the tumor center with (bold line) and without tumor motion]
Image processing books that I have used recently and that are available at the CSE library:
Our task over the next two weeks is to take the microscopy images you will later acquire and develop an algorithm to automatically detect, count, and classify the cells in your samples. For some of these images, detecting and counting the cells may not be overly difficult, given the small number of cells and the fact that they are usually well defined and separated from each other. As we develop our methods, however, we should keep in mind the ultimate task of dealing with images containing perhaps hundreds of overlapping cells of various sizes and shapes. Your algorithm should also be able to handle images that contain considerably more noise or debris in the microscopy field.
When you look at this image and are asked to locate all the various cells, you immediately point to each individual cell, and you are almost always correct in your choice. But how does your brain do that? In our field, we are often asked to design tools and devices that are intended to mimic, or sometimes even to improve on, the function of the human body. This is one of those instances, and you'll find it helpful to consider some of the types of processes that your brain uses to perform object recognition. For instance, we should first ask ourselves: what features of the objects in the microscopy image make you identify them as being something that deserves your attention?
We can list several of these:
Second, what features of the objects make you identify them as an object of a particular class?
We can list several of these:
A major identifying attribute is pixel intensity contrast relative to the surroundings. It should not surprise you that your eyes and brain perform background subtraction and edge detection very early on in the processing of visual information. First, before the visual signals even leave the retina, the visual cues coming from each rod cell are modified to accentuate image contrast. As you can read in the excerpts from the Textbook of Medical Physiology by Guyton, each rod cell has a central excitatory zone surrounded by an inhibitory zone. The effect of this bi-phasic response is to improve our ability to detect differences in image intensity from one rod cell to the next, and it also reduces our attention to areas where the image intensity is constant across a large region. In other words, the retina performs the initial image processing steps of background subtraction and edge enhancement. The deeper parts of the brain then add to this enhancement and look for additional features such as object shape and motion across the visual field.
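As a computational aside (my own illustrative analogy, not from the Guyton excerpts), this excitatory-center/inhibitory-surround response behaves much like convolution with a Laplacian-like kernel: a uniform region produces zero output, while intensity differences between neighbors are accentuated. A minimal Java sketch on a plain 2D array:

// Sketch: excitatory-center / inhibitory-surround response modeled as
// convolution with a 3x3 Laplacian-like kernel. A uniform region yields
// a zero response; intensity changes between neighbors are accentuated.
public class CenterSurroundDemo {
    static final float[][] KERNEL = {
        { -1f, -1f, -1f },
        { -1f,  8f, -1f },
        { -1f, -1f, -1f }
    };

    public static float[][] apply(float[][] img) {
        int h = img.length, w = img[0].length;
        float[][] out = new float[h][w];   // border pixels stay 0 for brevity
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                float sum = 0f;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx];
                out[y][x] = sum;
            }
        }
        return out;
    }
}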
Guyton's physiology textbook:
The 'Dots' illusion:
Your eyes are performing background subtraction to create the illusion of dots between the squares.
So how do we quantify the edges in an image in a manner that can be performed by a computer? Since an edge is defined as a sharp change in image intensity over relatively few pixels, a simple method is to compute the slope of the intensity profile at each pixel. There are several methods for computing a slope, and we should investigate several of them to discern which is most appropriate for our current application. A commonly used approach is to compute the magnitude of the slope at the pixel of interest, as determined from the pixels on one or both sides of it. This is called computing the Gradient of the image; it is usually done in 2D, and the computation is called performing a Gradient Operation on the image. In lab we will investigate further the properties of the image gradient operator.
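To make the 2D case concrete, here is a minimal sketch (not the ImageJ plug-in itself) that estimates the gradient magnitude from central differences:

// Sketch: 2D gradient magnitude via central differences.
// Gx ~ (I[y][x+1] - I[y][x-1]) / 2, Gy ~ (I[y+1][x] - I[y-1][x]) / 2,
// magnitude = sqrt(Gx^2 + Gy^2). Border pixels are left at zero.
public class GradientDemo {
    public static float[][] gradientMagnitude(float[][] img) {
        int h = img.length, w = img[0].length;
        float[][] mag = new float[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                float gx = (img[y][x + 1] - img[y][x - 1]) / 2f;
                float gy = (img[y + 1][x] - img[y - 1][x]) / 2f;
                mag[y][x] = (float) Math.sqrt(gx * gx + gy * gy);
            }
        }
        return mag;
    }
}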
For the first lab session, you should first familiarize yourself with the ImageJ program. ImageJ is meant to be an easy-to-use image processing tool. It is based on the program NIH Image, which was written for and became popular on Macintosh computers. ImageJ was written in the Java language by the authors of NIH Image, who still reside at the NIH. Because ImageJ is written in Java, it can be compiled and run on virtually any computer system. It has an official web site at http://rsb.info.nih.gov/ij/ where you can look up the latest examples, documentation, and plug-ins. It is designed to be modular, meaning that individual users are encouraged to write their own small programs (plug-ins), and these can be automatically compiled and incorporated into the ImageJ program without much additional effort on the user's part. ImageJ is very similar to Scion Image in many respects. One difference is that (apparently) Scion Image can handle only TIFF, BMP, and DICOM image formats, while ImageJ handles virtually any image format (TIFF, GIF, JPG, BMP, DICOM, etc.). Scion Image's drawing tools are better than ImageJ's, though. A very nice feature of ImageJ is that it is entirely free and open source -- meaning that you have full access to all its source code. This is a great help when trying to modify the code or to write your own from scratch, since you have many working examples to look at.
The Window and Level gizmo
The Window value assigns the spread in gray-scale values represented in the display, and the Level sets the middle of the range, i.e. the intensity value in the original image that will get the middle gray-level value (128) in the transformed image. To showcase this, try adjusting the Window and Level values while watching how the displayed image changes.
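As a sketch of what the gizmo is doing under the hood (assuming an 8-bit display range of 0-255; the actual implementation may differ):

// Sketch: map a raw intensity to an 8-bit display value. Intensities
// below (level - window/2) map to 0, those above (level + window/2)
// map to 255, and the window is spread linearly across 0..255 so the
// level lands on the middle gray value (about 128).
public class WindowLevelDemo {
    public static int windowLevel(float intensity, float window, float level) {
        float low = level - window / 2f;
        float display = (intensity - low) / window * 255f;
        if (display < 0f) display = 0f;
        if (display > 255f) display = 255f;
        return Math.round(display);
    }
}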
Gradient Operator:
A separate plug-in has been created to allow one to compute the gradient of an image using any of a number of user-defined parameter settings [ ImageJ > Process > Gradient Analysis ]. The plug-in allows you to compute the unidirectional gradient in either the X-direction or the Y-direction, or to compute a 2D gradient with magnitude defined as sqrt(Gx^2 + Gy^2). For the unidirectional gradient calculation, you can set whether to use the slope of the intensity directly, or to use the magnitude (absolute value) of the slope. You are also able to set the width (in pixels) of the region over which the slope is computed.
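The sketch below mimics the unidirectional option as described above; it is not the plug-in's actual source, and the width handling is my assumption of how the slope region is used:

// Sketch: X-direction gradient with a configurable slope-region width.
// The slope at x is estimated from pixels width/2 away on either side;
// if useAbsolute is set, the magnitude of the slope is returned rather
// than the signed slope.
public class UnidirectionalGradientDemo {
    public static float[][] gradientX(float[][] img, int width, boolean useAbsolute) {
        int h = img.length, w = img[0].length;
        int half = Math.max(1, width / 2);
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = half; x < w - half; x++) {
                float slope = (img[y][x + half] - img[y][x - half]) / (2f * half);
                out[y][x] = useAbsolute ? Math.abs(slope) : slope;
            }
        }
        return out;
    }
}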
Segmentation of image objects is a critical issue in many medical imaging applications, such as delineating and mapping different structures in the brain (caudate nucleus vs. thalamus vs. tumorous mass). Around 1990, a novel technique appeared in which a mathematical construct for a piecewise curve in space was combined with a mechanical model of the stretching and bending forces of a spring to construct a deformable contour model, also known as an active contour, and known in the business simply as a snake. The basic idea of a snake is that one wants a contour line that can respond actively to the information provided in the underlying image, but that also has some built-in constraints to prevent it from being led into doing wacky, non-physiologic things, like contorting into a pretzel shape when we all know that the surface of the heart is relatively elliptical by nature. The ultimate goal is for the user to draw an initial, simply shaped contour near the image object of interest, and then have the contour automatically resize and reshape itself to match the image object.
First, the contour line is defined as a sequence of nodes connected by spring-like connections. The driving force behind the movement of the contour line toward the image object is usually taken to be an attraction of each node toward the object boundaries, i.e. the magnitude of the gradient operator acting on the original image. Thus an energy term is associated with each node, wherein a low energy state exists when a node is sitting on a bright spot in the gradient image -- the external energy term. The constraints on the contour shape are created by assigning high energies whenever there is a sharp bend in the contour line and/or whenever nodes become spaced too far from or too close to their neighbors -- the internal energy term. The objective for the snake, then, is to work toward the minimum energy state, that is, one in which the contour line lies on or very close to the image object at all nodes, yet retains a relatively smooth shape.
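To make the bookkeeping concrete, here is a minimal sketch of the energy at a single node; the weights alpha, beta, and gamma and the restLength parameter are illustrative names, not the actual program's parameters:

// Sketch: energy of a single snake node. prev, node, next are (x, y)
// coordinates of consecutive nodes; gradMag is the gradient-magnitude
// image. Stretching penalizes deviation from the desired node spacing,
// bending penalizes sharp turns (squared second difference), and the
// external term rewards sitting on strong edges (hence the negative
// sign). The snake seeks the configuration of lowest total energy.
public class SnakeEnergyDemo {
    public static double nodeEnergy(double[] prev, double[] node, double[] next,
                                    float[][] gradMag, double restLength,
                                    double alpha, double beta, double gamma) {
        double d1 = Math.hypot(node[0] - prev[0], node[1] - prev[1]);
        double d2 = Math.hypot(next[0] - node[0], next[1] - node[1]);
        double stretch = (d1 - restLength) * (d1 - restLength)
                       + (d2 - restLength) * (d2 - restLength);
        double bx = prev[0] - 2 * node[0] + next[0];
        double by = prev[1] - 2 * node[1] + next[1];
        double bend = bx * bx + by * by;
        double external = -gradMag[(int) node[1]][(int) node[0]];
        return alpha * stretch + beta * bend + gamma * external;
    }
}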
A snake is a 2D active contour; in 3D, the active contour is often called a balloon. As the name implies, the user typically draws a spherical 'balloon' entirely inside the object of interest; the 'balloon' is then allowed to 'inflate' and fill up the interior of the image object in all directions until the object edges are met. Often one has to tweak the relative strength of the external versus internal energy terms to optimize the snake for a particular segmentation need. If there is no underlying image feature information, the snake will take on a circular shape (which minimizes both bending and stretching). If there is no internal energy term, the contour line will often take on a very irregular shape, responding to any noise or gap in the image feature information. I created an interactive snake gizmo in ImageJ for the purpose of segmenting lung contours in diagnostic CT/MRI images. The gizmo allows the user to adjust the relative internal and external energy coefficients, and other parameters of interest, on the fly while observing the result on the screen. The program is clever enough that once one slice has been completed, that snake geometry is used as the starting point for the next image slice, and so on. The program I wrote has some problems at times, but it beats doing it all by hand.
Try the snake on the test image first (demos1/testImage.gif). To use the program, you must first open an image and draw a box-shaped (or oval-shaped) region of interest (ROI) around the object of interest, then invoke the procedure (ImageJ > Process > InterActivesnake). The incomplete disk at the top left illustrates the utility of having internal energy terms to interpolate over gaps in the image data. The blob at the middle left side is a good test of the snake's convergence. Also try the program on one of the lung CT images (demos1/lungLesionCT/37.img) and attempt to segment one of the lungs.