Charge-coupled device

A specially developed CCD used for ultraviolet imaging in a wire-bonded package.

A charge-coupled device (CCD) is an image sensor consisting of an integrated circuit containing an array of linked, or coupled, light-sensitive capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to one or other of its neighbours. CCDs are used in digital photography and astronomy (particularly in photometry, optical and UV spectroscopy, and high-speed techniques such as lucky imaging).

History

The CCD was invented in 1969 by Willard Boyle and George Smith at AT&T Bell Labs. The lab was working on the Picturephone and on the development of semiconductor bubble memory. Merging these two initiatives, Boyle and Smith conceived the design of what they termed 'Charge "Bubble" Devices'. The essence of the design was the ability to transfer charge along the surface of a semiconductor. Because the CCD started its life as a memory device, one could only "inject" charge into the device at an input register. However, it was immediately clear that the CCD could receive charge via the photoelectric effect, and electronic images could be created. By 1970 Bell researchers were able to capture images with simple linear devices; thus the CCD was born. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild was the first with commercial devices, and by 1974 had a linear 500-element device and a 2-D 100 × 100 pixel device.

In January 2006, Boyle and Smith received the Charles Stark Draper Prize, presented by the National Academy of Engineering, for their work on the CCD.

Architecture

CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer and interline. The distinguishing characteristic of each of these architectures is its approach to the problem of shuttering.

In a full-frame device, all of the image area is active and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image will smear as the device is clocked or read out.

With a frame transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.

The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the "fill factor" to approximately 50% and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90% or more depending on pixel size and the overall system's optical design.
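
As a rough illustration of the fill-factor arithmetic, the following sketch scales an assumed intrinsic quantum efficiency by assumed fill factors (all numbers are illustrative, not measurements of any particular device):

    # Illustrative fill-factor arithmetic; every number here is an assumption.
    intrinsic_qe = 0.70           # fraction of incident photons converted in the active silicon
    fill_factor_interline = 0.50  # roughly half the pixel is masked in a plain interline device
    fill_factor_microlens = 0.90  # microlenses can recover much of the lost collecting area

    # Effective QE falls in proportion to the fraction of the pixel that actually collects light.
    print(intrinsic_qe * fill_factor_interline)   # ~0.35 without microlenses
    print(intrinsic_qe * fill_factor_microlens)   # ~0.63 with microlenses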

The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-hungry mechanical shutter, then an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for applications that require the best possible light collection, and where money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer architecture falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, the choice of frame-transfer is usually made when an interline architecture is not available, such as in a back-illuminated device.

Applications

CCDs containing grids of pixels are used in digital cameras, optical scanners and video cameras as light-sensing devices. They commonly respond to 70% of the incident light (meaning a quantum efficiency of about 70%), making them more efficient than photographic film, which captures only about 2% of the incident light. As a result, CCDs were rapidly adopted by astronomers.

One-dimensional CCD from a fax machine.

An image is projected by a lens onto the capacitor array, causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, while a two-dimensional array, used in video and still cameras, captures the whole image or a rectangular portion of it. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbour. The last capacitor in the array dumps its charge into an amplifier that converts the charge into a voltage. By repeating this process, the control circuit converts the entire contents of the array to a varying voltage, which it samples, digitizes and stores in memory. Stored images can be transferred to a printer, storage device or video display. CCDs are also widely used as sensors for astronomical telescopes and night vision devices.
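
The shift-and-read sequence described above can be illustrated with a minimal sketch in Python; the charge values and the volts-per-electron conversion factor are arbitrary assumptions, and real devices of course clock the charge in hardware:

    # Sketch of reading out a one-dimensional CCD register.
    charges = [120.0, 80.0, 45.0, 200.0, 10.0]  # illustrative charge packets (electrons)
    gain = 2.0e-6                               # assumed output-amplifier gain (volts per electron)

    samples = []
    row = charges[:]
    while row:
        # The last capacitor dumps its charge into the output amplifier,
        # which converts it to a voltage sample.
        samples.append(row.pop() * gain)
        # On the next clock, every remaining packet shifts one pixel toward the output.

    print(samples)  # voltage samples in readout order (pixel nearest the output first)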

An interesting astronomical application is to use a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view.
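
A minimal sketch of this mode (often called drift scanning), assuming the sky image drifts by exactly one pixel row per clock cycle so that each charge packet stays aligned with the same patch of sky:

    import numpy as np

    rows, cols = 4, 3
    sky = np.arange(10 * cols, dtype=float).reshape(10, cols)  # an illustrative strip of sky
    ccd = np.zeros((rows, cols))                               # charge on the detector
    readout = []                                               # rows read at the output edge

    for t in range(sky.shape[0] + rows):
        # Expose: detector row i currently sees sky row (t - i) as the sky drifts past.
        for i in range(rows):
            s = t - i
            if 0 <= s < sky.shape[0]:
                ccd[i] += sky[s]
        # Clock: read the row at the output edge, then shift all charge one row toward it.
        readout.append(ccd[-1].copy())
        ccd = np.roll(ccd, 1, axis=0)
        ccd[0] = 0.0

    # Away from the edges, each read-out row is the corresponding sky row
    # integrated over all four detector rows, i.e. four times the per-row flux.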

CCDs are typically sensitive to infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. Because of this sensitivity, CCDs used in astronomy are usually cooled to liquid-nitrogen temperatures, since room-temperature sources emit infrared black-body radiation. Another consequence of their infrared sensitivity is that infrared from remote controls will often appear on CCD-based digital cameras or camcorders if they lack infrared blockers. Cooling also reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths.

Thermal noise, dark current, and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. Averaging the images taken with the shutter closed lowers the random noise; the resulting average "dark frame" is then subtracted from the open-shutter image to remove the dark current and other systematic defects in the CCD (dead pixels, hot pixels, etc.).
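
A minimal sketch of this dark-frame calibration, using small numpy arrays in place of real exposures (all values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    shape = (4, 4)

    # Illustrative "true" sky signal and a fixed pattern of dark current / hot pixels.
    sky = rng.uniform(100.0, 200.0, size=shape)
    dark_pattern = rng.uniform(0.0, 50.0, size=shape)

    def exposure(signal):
        """One simulated exposure: signal plus the fixed dark pattern plus random noise."""
        return signal + dark_pattern + rng.normal(0.0, 5.0, size=shape)

    # Average several closed-shutter exposures into a low-noise master dark frame.
    master_dark = np.mean([exposure(np.zeros(shape)) for _ in range(10)], axis=0)

    # Subtract the master dark from the open-shutter image to remove the systematic pattern.
    calibrated = exposure(sky) - master_dark

    print(np.abs(calibrated - sky).mean())  # residual is now dominated by random noise alone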

CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations and breezes, as well as the considerable weight of most imaging platforms. To take long CCD exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding: a second CCD chip rapidly detects periodic errors in tracking and commands the mount's motors to correct for them. Most autoguiders use an off-axis CCD chip to monitor deviations during imaging; however, some have the autoguider CCD and the imaging CCD in the same camera.
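
The correction loop an autoguider runs can be sketched as follows; measure_offset() and mount are hypothetical stand-ins for the guide-camera measurement and the mount interface, not a real API:

    # Sketch of an autoguiding loop: measure the guide star's drift on the
    # guiding CCD and command the mount to move a fraction of the error back.
    def guide(measure_offset, mount, gain=0.5, cycles=100):
        for _ in range(cycles):
            dx, dy = measure_offset()            # guide-star drift in pixels since the last cycle
            mount.nudge(-gain * dx, -gain * dy)  # proportional correction back toward the target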

Color cameras

Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than to either red or blue). The result is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
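
The repeating 2×2 pattern can be sketched as follows, assuming an RGGB layout (real cameras may start the pattern on a different corner):

    import numpy as np

    def bayer_mosaic(red, green, blue):
        """Sample full-resolution R, G and B planes through an RGGB Bayer mask.

        Each 2x2 block keeps one red, two green and one blue sample, so every
        pixel records brightness but colour is sampled at reduced resolution.
        """
        h, w = red.shape
        mosaic = np.empty((h, w))
        mosaic[0::2, 0::2] = red[0::2, 0::2]     # R
        mosaic[0::2, 1::2] = green[0::2, 1::2]   # G
        mosaic[1::2, 0::2] = green[1::2, 0::2]   # G
        mosaic[1::2, 1::2] = blue[1::2, 1::2]    # B
        return mosaic

    # Flat colour planes make the interleaving easy to see in the output.
    r = np.full((4, 4), 0.9)
    g = np.full((4, 4), 0.5)
    b = np.full((4, 4), 0.1)
    print(bayer_mosaic(r, g, b))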

Better color separation can be achieved by three-CCD devices (3CCD), which use a dichroic beam-splitter prism to split the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Some semi-professional digital video camcorders (and all professional ones) use this technique.

Since very-high-resolution CCD chips are very expensive as of 2005, a 3CCD high-resolution still camera would be beyond the price range of even many professional photographers. Some high-end still cameras instead use a rotating color filter to achieve both color fidelity and high resolution. These multi-shot cameras are rare and can only photograph objects that are not moving.

Competing technologies

Recently it has become practical to create an Active Pixel Sensor (APS) using the CMOS manufacturing process. Since CMOS is the dominant technology for all chip-making, CMOS image sensors are cheap to make, and signal-conditioning circuitry can be incorporated into the same device. The latter advantage helps mitigate their greater susceptibility to noise, which is still an issue, though a diminishing one; the noise arises from the use of low-grade amplifiers in each pixel, instead of the single high-grade amplifier that serves the entire array in a CCD. CMOS sensors also have the advantage of lower power consumption than CCDs. At present, however, there is no clear-cut winner between the competing technologies. CCDs still boast higher sensitivity and higher dynamic range than CMOS sensors, and for these reasons CCDs are preferred in astronomical imaging, where these factors are of prime importance.

References

  • W. S. Boyle and G. E. Smith, Bell Sys. Tech. J., 49, 587 (1970).
  • B. G. Streetman and S. K. Banerjee (2006), "Solid State Electronic Devices", Prentice Hall, 6th Ed., Chapter 9.
  • R. S. Muller and T. I. Kamins with M. Chan (2002), "Device Electronics for Integrated Circuits", John Wiley and Sons, 3rd Ed., Chapter 8.
  • James R. Janesick (2001), "Scientific Charge-Coupled Devices", SPIE Press Monograph Vol. PM83.

See also

CCD vendors