Photography has for many purposes been supplanted by techniques involving manipulation of photoelectrons, giving higher detective quantum efficiency (DQE) and potentially far greater accuracy than photography. These devices include photon-counting systems, based on TV or microchannel cameras, as well as CCD arrays. Any such linear detector has a digital output, proportional to the number of electrons liberated by photon interactions within each picture element (pixel).
The fact that the same detector is used over and over (unlike the one-shot use of photographic emulsion) means that very accurate calibrations are possible. A linear detector exposed for time t to radiation of intensity I has response at each point R = b + dt + cIt, where b is sometimes known as the bias, d is the dark current, and c is a calibration factor depending on the telescope collecting area and detector efficiency. Each of these constants may be a function of position (x,y). Various calibrations may be used to remove these purely instrumental terms with high accuracy, in the not unrealistic case that they vary slowly compared to the timescale of observation; many detectors have tightly controlled temperatures to ensure just this condition.
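As a concrete sketch of how the instrumental terms in R = b + dt + cIt are removed, the following (hypothetical) routine applies the standard steps: a bias frame estimates b, a dark frame of known exposure estimates the dark rate d, and a normalized flat field gives the c(x,y) pattern. The function and variable names are illustrative, and the flat frame is assumed to be already bias- and dark-subtracted.

```python
import numpy as np

# Pixel-by-pixel calibration following R = b + d*t + c*I*t.
# Assumptions: `bias` estimates b(x,y); `dark` is an exposure of length
# t_dark with the shutter closed (so dark = b + d*t_dark); `flat` is a
# bias/dark-subtracted exposure of a uniform source, proportional to c(x,y).

def calibrate(raw, bias, dark, flat, t, t_dark):
    """Recover counts proportional to I*t at each pixel."""
    dark_rate = (dark - bias) / t_dark        # d(x, y)
    signal = raw - bias - dark_rate * t       # c * I * t
    norm_flat = flat / np.median(flat)        # c(x, y), unit median
    return signal / norm_flat                 # proportional to I * t
```

With synthetic frames built from known b, d, and c, the routine recovers I·t exactly, which is a useful sanity check on any real reduction pipeline.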
The basic calibrations are as follows: bias frames (zero-length exposures, measuring b), dark frames (exposures with the shutter closed, measuring the dark current d), and flat fields (exposures of a uniformly illuminated source, measuring the pixel-to-pixel variation of c).
Sky flats and quality flat fielding are indispensable when using infrared imagers. In these passbands, the sky always dominates the signal, so almost every object of interest has smaller amplitude than the flat-field variations. Correction to 0.1% is required, so frequent chopping between object and blank sky, or placing the object in two different places, is needed. As work in the visible band pushes fainter and fainter, this kind of observing procedure can be advantageous as well.
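One common way to build such a sky flat is to median-combine a set of dithered exposures of (mostly) blank sky: because any object lands on different pixels in each frame, a pixel-by-pixel median rejects the objects and keeps the flat-field pattern. A minimal sketch, with hypothetical names:

```python
import numpy as np

# Sketch of building a sky flat from dithered exposures. Because each source
# falls on different pixels frame to frame, the pixel-by-pixel median over
# the stack recovers the flat-field pattern with the objects rejected.

def sky_flat(frames):
    """frames: sequence of 2-D dithered exposures of mostly blank sky."""
    stack = np.asarray(frames, dtype=float)
    flat = np.median(stack, axis=0)   # sources appear in few frames -> rejected
    return flat / np.median(flat)     # normalize to unit median
```

Reaching the 0.1% level in practice requires enough frames that the median is well determined at every pixel, which is one reason these observing sequences are long.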
Additional imaging modes can be used with some detectors, beyond the point-and-shoot process long used in photography. Drift scanning is possible with CCDs or photon counters. With CCDs, the image is allowed to drift across the detector at the same rate at which charge packets are clocked along the columns, so that each point on the sky is successively seen by each pixel in the column for an effective exposure equal to the single-frame readout time. This may be done at sidereal rate (i.e. the telescope not tracking, and in at least one case set on concrete blocks) to give a strip image stretching around the sky, or slowly, to extend the size of possible images and give higher cosmetic quality, since flat-field irregularities largely cancel out. With photon counters, one records x,y,t for each detection, and uses the position of the telescope at t to transform into a single coordinate system on the sky. This has been used, for example, for the EUVE sky survey and the ROSAT X-ray survey, covering very wide areas without the need for precision attitude control.
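The photon-counting case can be sketched very simply: each recorded event (x, y, t) is shifted by the telescope's known motion at time t and accumulated into one sky image. The constant-rate drift model and all names below are illustrative assumptions, not any survey's actual pipeline.

```python
import numpy as np

# Toy event-list imaging for a photon counter scanning at a known, constant
# rate: each detection (x, y, t) is shifted by the drift accumulated by time
# t and binned into a single sky image, one count per photon.

def accumulate_events(events, drift_rate, nx, ny):
    """events: iterable of (x, y, t); drift_rate: pixels per unit time in x."""
    image = np.zeros((ny, nx))
    for x, y, t in events:
        sx = int(round(x + drift_rate * t))   # detector -> sky x coordinate
        if 0 <= sx < nx and 0 <= y < ny:
            image[y, sx] += 1
    return image
```

The key property is that detections of the same sky position, seen at different detector positions and times, land in the same output pixel.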
This drift scanning is a special case of scan imaging, which can be as simple as using a single-element photocell to build up an image by scanning in a rectangular spiral or boustrophedonic pattern (lovely word, isn't it?). This is done by moving the whole telescope, by moving a single mirror (as in IR chopping and wobbling secondaries), or by some specific moving part in the instrument. A particularly interesting kind of scanning uses masks (or so-called coded masks) to admit only light from certain parts of the image at once. A matrix inversion of outputs against the mask yields the input image. This has found recent application for X-rays, giving a useful technique for improving angular resolution at energies where direct focussing is difficult. The mathematical details are on pages 218-220 of the text.
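The mask inversion can be illustrated in one dimension: each detector reading sums the sky pixels admitted by one cyclic shift of an open/closed mask, D = M S, and if the mask matrix M is invertible the sky is recovered as S = M⁻¹D. The 4-element pattern below is an illustrative choice for the sketch, not a real mask design (real designs, such as uniformly redundant arrays, are chosen for well-conditioned inversion in noise).

```python
import numpy as np

# Minimal 1-D coded-mask sketch. Each row of M is one cyclic shift of the
# open (1) / closed (0) mask pattern; the detector records D = M @ S, and
# the sky S is recovered by solving the linear system.

mask = np.array([1, 1, 0, 1])                        # illustrative pattern
M = np.array([np.roll(mask, k) for k in range(4)])   # one row per mask position

sky = np.array([5.0, 0.0, 2.0, 1.0])                 # "true" 1-D sky
data = M @ sky                                       # what the detector records
recovered = np.linalg.solve(M, data)                 # the inversion step
```

In practice the inversion is done in the presence of noise, so the conditioning of M (set by the mask pattern) matters as much as its invertibility.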
Improvement in image quality, particularly rejection of cosmic rays and overcoming cosmetic defects such as bad columns, is possible with image stacking. Here, instead of one long-exposure image, a number of shorter exposures are taken, ideally of such a length that the Poisson statistics are dominated by sky noise rather than read noise. In this case, there is no significant noise penalty to breaking up the exposure. The telescope may be shifted slightly between exposures. Then the images are interpolated to a common sky coordinate system and added or averaged, incorporating a rejection scheme for discrepant pixels. For various applications, one may use a median or κ-σ rejection. Any feature present on only a single exposure will be rejected, while weak features will add coherently. Noise will add only quadratically, as expected from Poisson statistics. For example, for the optically corrected Hubble telescope this is the only way to distinguish cosmic ray events from faint stars (Lauer 1989, PASP 101, 445), and a WFPC2 exposure of 3 hours will have EVERY PIXEL contaminated by a cosmic ray at a level 3 σ or greater. Also, any residual flat-field errors due to dust on the filter or wavelength mismatch will largely be cancelled out in this way. A photographic analog of this process exists, which can cancel out some of the grain irregularities in single plates. One successively copies multiple plates onto a print or other emulsion, using star images to register each in turn (very carefully). The first exposure needs to be longer than the others to overcome the toe behavior of the characteristic curve.
Once the data are calibrated, there exists a whole armada of enhancement, restoration, and analysis techniques. The book by Gonzalez and Wintz distinguishes enhancement and restoration, with enhancement including manipulations designed to show the eye what is in an image and restoration to remove the effects of some known degradation (such as the deconvolution problem already discussed). Some useful kinds of enhancement (illustrated, if you follow the links, by a red-light WFPC2 montage of NGC 3368) include display with variable contrast and pseudocolor mappings; logarithmic, histogram-equalized, or customized intensity scales; edge enhancements; gradient removal; median filtering or Fourier methods to emphasize or suppress various spatial frequencies; and convolution with various kernel functions to emphasize objects of a certain kind. The last has the paradoxical result that blurring an image can increase the detectability of certain objects (when the image is oversampled for the objects sought).
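Histogram equalization, one of the display mappings mentioned above, can be sketched compactly: intensities are remapped through the image's own cumulative distribution, so each display level holds roughly the same number of pixels and contrast is spread over the whole dynamic range. The function below is an illustrative sketch, not any package's API.

```python
import numpy as np

# Sketch of histogram equalization for display: map each intensity through
# the image's cumulative distribution function, producing values in [0, 1]
# with approximately uniform occupancy per display level.

def hist_equalize(img, levels=256):
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=levels)
    cdf = hist.cumsum() / flat.size                  # cumulative distribution
    out = np.interp(flat, edges[:-1], cdf)           # monotone remapping
    return out.reshape(img.shape)
```

Because the mapping is monotone, the ordering of pixel brightnesses is preserved; only the contrast allocation changes.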
Image analysis can be quite problem-specific. As an example, consider first images consisting only of stars. Since stars are spatially unresolved, the information content of the image comes down to a list of positions and intensities with errors. Here, a program to fit PSFs to all possible stars, separating blends and determining errors, may be the optimal reduction, incorporating the a priori information that all objects look like the PSF. For galaxies, we may be interested in the two-dimensional brightness distribution, perhaps as found by fitting ellipses to the isophotes after rejecting interfering star images or other galaxies. Or we may need a list of all statistically significant objects and some assessment of the probability that each is a star or galaxy, along with a matching to a catalog from an image in a different passband. We may be interested in the comparison of multiple images: comparing images taken in different emission lines whose ratio is temperature-sensitive, or mapping dust by comparing radio and optical emission from ionized gas. Specialized software packages exist for many of these tasks (DAOPHOT, GRASP, FOCAS), as well as general-purpose astronomical packages such as IRAF, MIDAS, IDL, and AIPS. Considerable thought may be needed to find the optimal way to extract some desired information; once again, the more you already know about the objects, the better tailored and more powerful the procedure can be. Be guided by as much understanding of the data as possible; it is valuable to inspect them in as much detail, and in as wide a variety of ways, as possible. You don't want to miss any Nobel-quality material through sheer inattention...
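The core of PSF-fitting photometry can be illustrated with a deliberately simplified case: if the star's position and the PSF shape are known, the stellar flux A and sky level B in the model image = A·psf + B follow from linear least squares. Real packages such as DAOPHOT also fit positions, handle blends, and propagate errors; the sketch below, with hypothetical names, shows only the linear step.

```python
import numpy as np

# Toy PSF-fit photometry: with the PSF shape and star position known,
# solve image = A * psf + B for the flux A and sky level B by linear
# least squares over all pixels.

def fit_star(image, psf):
    design = np.column_stack([psf.ravel(), np.ones(psf.size)])
    coef, *_ = np.linalg.lstsq(design, image.ravel(), rcond=None)
    return coef   # [flux A, sky B]
```

This is the sense in which "all objects look like the PSF" is a priori information: it reduces each star to two numbers plus their uncertainties.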