Digital Imaging: Introduction
A digital image is usually a rectangular grid composed of individual pixels (picture elements, or PELs). A good analogy is a tile mosaic, where the smallest element is the individual tile (each of which is one color or shade). Each pixel in a digital image has a bit-depth value, which tells the computer which color (or shade of gray) the pixel will display (the greater the bit-depth, the more colors/grays to choose from). The combined effect of all the individually colored pixels creates the image.
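The relationship between bit depth and the number of available colors is a simple power of two. A minimal Python sketch (the language and function name are illustrative, not from the original page):

```python
# Number of distinct colors or gray shades representable at a given bit depth.
def shades(bit_depth: int) -> int:
    """Each pixel stores bit_depth bits, so it can take 2**bit_depth values."""
    return 2 ** bit_depth

for depth in (1, 8, 24):
    print(f"{depth}-bit: {shades(depth):,} values per pixel")
```

A 1-bit pixel is binary (2 values), an 8-bit pixel gives 256 shades, and 24-bit gives roughly 16.7 million colors.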
Several introductory resources with information about Digital Images:
The number of pixels in an image is often used as a way to describe the image's resolution. The word resolution has a specific technical meaning to microscope users, namely the ability to distinguish between two closely adjacent objects at a given magnification. In the context of digital images, the word resolution usually refers to how frequently an object was sampled.
The same object sampled at four different pixel densities
(Note: If you can stand back approximately 3 m from your computer screen, the perceived difference between these images begins to decrease.)
Image resolution is often confused with the resolution of the output device (computer monitor or printer). Output devices typically express their resolution in dots per inch (DPI). Digital imaging software programs (e.g., Adobe Photoshop™) often set their scale factors based on the monitor resolution (72 DPI); however, this setting is really only useful for images that will ultimately be displayed on a monitor (web pages). Too often I have seen people unnecessarily reduce the size of their images based on the 72 DPI setting and in the process "throw away" pixels (down-sampling), going from a crisp image like the right-most image above to the equivalent of the next image to the left. With scientific/microscopic images this is an unacceptable loss of data.

Printers often state their maximum output resolution in dots per inch (e.g., laser printers range from 300-1200 DPI, inkjets up to 1440 DPI, dye-sublimation printers 300 DPI, etc.). Comparing these output resolution numbers can lead a user to misleading conclusions. The safest thing to do is to "think pixels first."

Scientific digital images almost always carry some type of internal scale, so that each pixel has a size value (e.g., the person in the above image is approximately 1.8 m tall, therefore each pixel describing the person in the far-right image is roughly 4.25 cm in the x and y dimensions). Digital images from microscopes should have a scale bar of known size added to them before any size changes are applied. Size changes should be one of the last steps prior to printing a hard copy of an image. Try to avoid the common problem of down-sampling (throwing away pixels) followed by up-sampling (interpolation, or adding of pixels).

More information on resolution/sampling in digital images:
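The "think pixels first" arithmetic above can be sketched in a few lines of Python. The pixel count below is a hypothetical value chosen to roughly reproduce the 4.25 cm figure from the worked example; it is not stated in the original text:

```python
# Estimate the real-world size of one pixel from a reference object of known
# size, as in the example above (a person approximately 1.8 m tall).
def pixel_size(object_size_m: float, object_height_px: int) -> float:
    """Size of one pixel in metres, assuming square pixels."""
    return object_size_m / object_height_px

# If the person spans roughly 42 pixels in the far-right image,
# each pixel covers about 4.3 cm.
size_m = pixel_size(1.8, 42)
print(f"{size_m * 100:.2f} cm per pixel")
```

The same internal scale is why down-sampling destroys data: halving the pixel count doubles the real-world size each remaining pixel must represent.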
The bit-depth of an image can greatly affect the size of the computer file. In this example the image is assumed to be 1024 by 1024 pixels:

|Bit depth|Number of colors/shades|Uncompressed file size|
|---|---|---|
|1-bit|2 (black and white)|128 KB|
|8-bit|256|1 MB|
|24-bit|16.7 million|3 MB|
As the chart indicates, bit-depth determines the number of colors that can be displayed in the image. Images with only two colors are binary: each pixel is either black or white. Monitors and imaging hardware are typically limited to displaying grayscale images in 8-bit mode (256 shades of gray). Most monitors can display color images in at most 24-bit mode (true color), due to the limitations of the electronics in the cathode ray tube. Even these ranges exceed the sensitivity of the human eye, which is often stated as being able to distinguish only 16-32 shades of gray and roughly 2,000,000 colors. More information on bit-depth:
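The file sizes in the table follow directly from width × height × bit depth. A minimal sketch in Python (illustrative code, not part of the original page):

```python
# Uncompressed file size for an image: one bit_depth-deep value per pixel,
# converted from bits to bytes.
def file_size_bytes(width: int, height: int, bit_depth: int) -> int:
    return width * height * bit_depth // 8

# The 1024 x 1024 example from the table above:
for depth in (1, 8, 24):
    print(f"{depth}-bit: {file_size_bytes(1024, 1024, depth):,} bytes")
```

A 1024 × 1024 image works out to 131,072 bytes (128 KB) at 1-bit, 1,048,576 bytes (1 MB) at 8-bit, and 3,145,728 bytes (3 MB) at 24-bit, before any compression.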
The most commonly used color model for on-screen color is RGB (Red, Green, Blue). RGB is an additive color model: the three different phosphors on the monitor screen are excited at different intensities (usually an 8-bit range for each color, giving 256 intensities per color and a total of 16.7 million possible color combinations), and the eye perceives a color based on the mix of the three intensities.
Color printing typically uses a subtractive color model called CMY (Cyan, Magenta, Yellow; note: this is often referred to as CMYK due to the addition of blacK, which allows darker colors to be printed). The color inks combine on the white paper and act like a filter, absorbing some wavelengths of light and reflecting the remainder into the eye. Unfortunately, CMYK cannot reproduce as large a range of colors (referred to as a "gamut") as the RGB model. This can cause problems when trying to print an RGB image. More information on color (these links only begin to discuss some of the more technical aspects of color spaces and theory):
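The additive/subtractive relationship can be made concrete with the naive RGB-to-CMYK conversion. This is only a sketch: real print workflows use ICC color profiles and gamut mapping, which this simple formula ignores entirely.

```python
def rgb_to_cmyk(r: int, g: int, b: int):
    """Naive RGB -> CMYK conversion (no gamut mapping or color management)."""
    # Subtractive channels are the complements of the additive ones.
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)            # pull shared darkness into the black channel
    if k == 1.0:                # pure black
        return 0.0, 0.0, 0.0, 1.0
    return tuple(round((x - k) / (1 - k), 3) for x in (c, m, y)) + (k,)

print(rgb_to_cmyk(255, 0, 0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```

Note how pure red on screen becomes full magenta plus full yellow in ink: the two inks together absorb everything except red wavelengths.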
There are a large number of available file formats for storing digital images. The majority of the file formats are proprietary, and are specific to a given software program or specific uses (e.g., medical diagnostic imaging). Several well known file formats include:
- BMP - windows bitmap
- EPS - encapsulated postscript, this format is more useful for vector-based information than pixel-based information
- GIF - graphics interchange format, originally copyrighted by CompuServe, used on web pages; its 256-color palette limitation makes it unsuitable for most scientific images
- JPEG - joint photographic experts group, supports 24-bit color, uses a lossy compression technique (the discrete cosine transform), most often used on web pages; not suitable for most scientific images. The newer JPEG 2000 format uses wavelet-based compression and includes a lossless mode.
- PNG - portable network graphics, supports 48-bit color and 16-bit grayscale, lossless compression; a relatively new format that is not yet widely supported
- TIFF - tagged image file format, originally developed by Aldus Corp. (later purchased by Adobe Systems) and Microsoft Corp.; supports paletted images (up to 8-bit), 8-bit and (in some programs) 16-bit grayscale, as well as 24-bit color. This is probably the most commonly used file format for scientific images. Supports lossless LZW compression (although not all programs can open compressed TIFF files).
More information on file formats:
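The lossless/lossy distinction above is easy to demonstrate. The sketch below uses Python's standard `zlib` (DEFLATE, the algorithm behind PNG) as a stand-in for TIFF's LZW; the algorithms differ, but the lossless round-trip property is the same:

```python
import zlib

# Compress raw pixel data and verify the round trip is bit-exact.
raw = bytes([0] * 512 + [255] * 512)   # a simple two-tone "scanline" pattern
packed = zlib.compress(raw)
restored = zlib.decompress(packed)

print(len(raw), len(packed), restored == raw)
```

A lossless codec shrinks the file yet restores every pixel value exactly; a lossy codec like baseline JPEG trades that guarantee for smaller files, which is why it is a poor choice for quantitative scientific images.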
Two excellent resources on the physiology of human vision:
The human eye can be easily fooled:
What is an Illusion? (essay by J.R. Block, Ph.D., at Sandlot Science)