Interferometry has proven to be crucial in attaining high angular and spectral resolution for certain classes of observation, particularly in mapping at radio frequencies and measuring the angular sizes of stars. We will first develop the properties of a two-element interferometer, relate its response to the two-dimensional Fourier transform of the image on the sky plane, and discuss recovery of a genuine image from multi-element interferometer measurements.

A schematic interferometer consists of two detectors some vector
distance *d* apart, whose phase-sensitive outputs are combined to yield the
interferometer response. For simplicity, consider for now a plane coherent
wavefront (i.e. from a distant point source of radiation) propagating
in some direction **k** with wavelength λ. The wavefronts
reaching the two detectors will have a phase difference given by
2 π **k** · *d* / λ, so the combined output will be
the incoming intensity modulated by this phase difference.
For example, every time 2 π **k** · *d* / λ = 2 π *N*
for some integer *N*, the correlated product of the responses will
be maximized, and whenever it is 2 π (*N*+1/2), it will be zero.
Thus as we track a point source over time with a baseline fixed on a
rotating platform (that is, the Earth's surface), we will see
a pattern of fringes in a time-response diagram. This also holds for
a non-coherent monochromatic wavefront, since we deal with interference
of the same piece of the wave train as it passes each detector.
The derivation in the book takes a slightly different approach considering
correlated and uncorrelated terms in the output; uncorrelated noise is
strongly suppressed in the time average since we average over very large
numbers of wave periods (the inverse frequency), often greater than 10^{9}.
This is why Air Force planes can use interferometers for maneuvering
practice without the astronomers getting too upset.
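The fringe behavior above can be sketched numerically; this is a minimal illustration, and the baseline length, wavelength, and angle range below are assumed values rather than any real instrument's.

```python
import numpy as np

# Correlated response of a two-element interferometer to a distant point
# source.  Baseline d and wavelength lam are illustrative numbers.
lam = 0.21                               # observing wavelength in m (21 cm line)
d = 1000.0                               # baseline length in m
theta = np.linspace(-1e-3, 1e-3, 2001)   # source angle from the baseline normal (rad)

# Geometric path difference d*sin(theta) gives the phase difference
# 2*pi*d*sin(theta)/lam between the two detectors.
phase = 2 * np.pi * d * np.sin(theta) / lam

# The time-averaged correlated product is the source intensity modulated
# by cos(phase): maximal at phase = 2*pi*N, zero at phase = 2*pi*(N + 1/2).
response = np.cos(phase)

# Adjacent fringe maxima are separated by about lam/d in angle.
fringe_spacing = lam / d                 # 2.1e-4 rad here
```

Tracking a source while the Earth rotates sweeps theta across this pattern, producing the fringes in the time-response diagram.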

The resolution of the interferometer is set by the angular spacing of
this fringe pattern, which looks very much like the Rayleigh
resolution criterion
Δ θ = λ / (2 *d*). For a
multielement system, this is to be understood as the longest
applicable baseline. The precise meaning of "resolution" depends
on how the Fourier plane is sampled and how the image is reconstructed,
especially whether the output map is made by weighting all *measurements*
equally (so-called natural weighting), or by weighting equal areas in
the Fourier domain equally as long as they are sampled by any measurements
at all (uniform weighting).
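As a back-of-envelope check of the resolution formula, the numbers below are illustrative (roughly the scale of the VLA's largest configuration at 6 cm), not exact instrument parameters.

```python
import math

# Resolution estimate Delta_theta = lam / (2 d) for the longest baseline.
lam = 0.06                      # observing wavelength (m)
d_max = 36_000.0                # assumed longest baseline (m)

dtheta_rad = lam / (2 * d_max)                    # resolution in radians
dtheta_arcsec = math.degrees(dtheta_rad) * 3600.0 # about 0.17 arcsec here
```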

The sky doesn't quite consist of perfect point sources, so we will consider
the generalization from a plane wavefront to one produced by an extended
source or ensemble of sources (which are, for finite angular resolution,
the same thing). By the superposition principle, the wavefront we detect
is the sum of the wavefronts from all differentials of solid angle
*d* Ω (or, in the truly continuous case, an integral). As derived in the text (pages
224-226) for a pair of point sources, multiple sources modulate the
output since the two sources may now cause constructive or destructive
interference when observed at various angles to the baseline. In general,
the observed correlated intensity at a given time is the integral of
the source intensity multiplied by the fringe pattern for a given two-element
interferometer, which is exactly one component of the two-dimensional
Fourier transform of the source distribution - specifically that with
spatial frequency equal to the angular fringe spacing and in the
direction corresponding to the baseline as seen by the wavefront.
This is a completely general theorem (p. 238), qualified only by
the response pattern of the optical system feeding the detectors
(the so-called primary beam). This principle gives rise to imaging
by aperture synthesis: for a nonvariable source, we can measure different
Fourier components at different times, letting the Earth's rotation
carry the baselines to different parts of the spatial-frequency plane.
This is often denoted the *(u,v)*-plane, defined as existing
in the spatial frequency (or wavelength) domain on a plane normal
to the wavefront's propagation and of course including the center
of the interferometer. The idea is to sample the *(u,v)*-plane at
enough points to define the source distribution unambiguously.
In principle, this requires infinitely dense sampling, or more realistically
sampling to the interferometer resolution, but in practice the structure
of real sources allows a much sparser sampling. The important thing
becomes avoiding any major gaps in coverage.
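The statement that one baseline measures one Fourier component can be made concrete with a toy sky of two point sources; the fluxes, positions, and *(u,v)* values below are all assumed for illustration.

```python
import numpy as np

# A single baseline at (u, v) wavelengths measures one Fourier component
# of the sky: V(u,v) = sum_k I_k * exp(-2*pi*i*(u*l_k + v*m_k)) for a sky
# of point sources at direction cosines (l_k, m_k).
l = np.array([0.0, 1e-4])      # source offsets along the u direction (rad)
m = np.array([0.0, 0.0])
I = np.array([1.0, 1.0])       # equal unit fluxes

def visibility(u, v):
    """Correlated output of one baseline, in flux units."""
    return np.sum(I * np.exp(-2j * np.pi * (u * l + v * m)))

# u * Delta_l = 1: the pair interferes constructively, |V| = total flux.
v_con = visibility(10_000.0, 0.0)
# u * Delta_l = 1/2: destructive interference, |V| = 0.
v_des = visibility(5_000.0, 0.0)
```

Sampling `visibility` at many *(u,v)* points and inverting the transform is exactly the aperture-synthesis program described above.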

To fill the *(u,v)*-plane faster, current interferometers have many
elements; for *n* elements, there are *n*(*n*-1)/2 distinct baselines, since
each pair contributes a baseline in one direction (the response of the
reversed baseline is obtained by complex conjugation). For example, the VLA with 27 antennae gives
351 baselines, and can get a rough image almost instantaneously in so-called
snapshot mode. A linear array of antennae cannot do so, since its
*(u,v)* coverage is always a straight line, and gathering information perpendicular
to this requires waiting for the Earth's rotation. The *(u,v)* coverage
depends both on the arrangement of elements and the track of the
source in the sky (hour angle range and declination). For sources far
from the pole, the resolution north-south is reduced since these
baselines are always working at a large angle to the wavefront.
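The pair count is just a combinatorial identity and is easy to check; no instrument assumptions are involved.

```python
# Distinct baselines among n elements: one per unordered pair of antennas.
def n_baselines(n: int) -> int:
    return n * (n - 1) // 2

# The 27-antenna VLA gives 351 simultaneous baselines, as quoted above.
vla_baselines = n_baselines(27)
```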

To map a source with an aperture-synthesis system, a *dirty map*
is constructed by the inverse Fourier transform of the calibrated
correlated outputs from all baselines and all sample times (perhaps
after rejecting those obviously compromised by electronic interference).
This gives an unrealistic representation to the extent that parts of the
*(u,v)*-plane were not covered; recovering the true image is a deconvolution problem.
Deconvolution in this application corrects for the unobserved parts
of the Fourier domain by making some reasonable assumptions about the
behavior of the source - spatial and frequency continuity or
minimal structure, for example. CLEAN and maximum entropy are popular
for this. It is good practice to perform these operations in the
measured domain: the image itself for optical data, the Fourier domain
for interferometer data. A few simple patterns can be recognized by plotting
response versus baseline length (so-called visibility plots): a point source
is constant, a Gaussian transforms into a Gaussian, and a pair of sources
shows a beat pattern related to its separation as sampled by the
baselines. More complicated sources require detailed image construction.
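Two of the simple visibility-plot patterns mentioned above have closed forms; the source sizes below are assumed for illustration.

```python
import numpy as np

# Visibility amplitude versus baseline length for two textbook cases.
u = np.linspace(0.0, 2e5, 201)        # baseline length in wavelengths

# Point source: flat visibility at the total flux.
v_point = np.ones_like(u)

# Circular Gaussian of FWHM theta (rad): the visibility is again a
# Gaussian, |V(u)| = exp(-(pi * theta * u)**2 / (4 * ln 2)).
theta = 1e-5                           # about 2 arcseconds FWHM
v_gauss = np.exp(-(np.pi * theta * u) ** 2 / (4 * np.log(2)))
```

A pair of point sources would add a cosine beat on top of these, with period set by the inverse separation.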

Up to tens of km, physically wired connections can be used to feed a
correlator (the scale of the
Westerbork or
VLA arrays). Beyond this,
the antennae must be radio-linked with a precision time reference
(as in the MERLIN system) or widely separated with coordinated
atomic time standards to allow phasing of outputs after the fact
(VLBI). VLBI has now been carried out with baselines of several
Earth diameters, first using a communications satellite as the remote station
and now by the dedicated 8-m HALCA antenna.
This technique, with resolution measured in microarcseconds, has
revealed the tiny scales of quasar cores. Dedicated arrays (such as the VLBA
and the Australia Telescope) can obtain VLBI maps with *(u,v)* coverage comparable
to a full synthesis. Earlier VLBI maps often suffered from very sparse
sampling (due to having only a handful of available antennae), so that
there is still an art to mapping structure on these scales. In particular,
sparse VLBI experiments often returned results which were realizations
of a model with a few fitted parameters rather than direct observation.

We have derived all these results for monochromatic radiation. Interferometry requires a narrow bandwidth, to avoid further interference effects due to pathlength differences induced solely by wavelength differences (and in principle the source spectrum as well). In mapping, finite bandwidth induces a radial smearing, since the mapping between angle and fringe phase is smeared across the band. Note also that the sampling of various spatial frequencies may be highly non-uniform, so that it is hard to get adequate representation over more than about a factor of 10 in spatial frequency.
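The radial smearing scales with fractional bandwidth and distance from the phase center; the rule-of-thumb estimate below uses assumed numbers for frequency, channel width, and source offset.

```python
# Rule-of-thumb bandwidth smearing: a source at radius r from the phase
# center is radially smeared by roughly (dnu / nu) * r.
nu = 1.4e9         # observing frequency (Hz)
dnu = 50e6         # bandwidth of one channel (Hz)
r_arcsec = 60.0    # offset of a source from the phase center (arcsec)

smear_arcsec = (dnu / nu) * r_arcsec   # radial smearing, same angular units
```

This is why wide-field continuum maps are made from many narrow channels rather than one broad band.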

Many of these same considerations apply to optical interferometry, except that we can no longer use phase-sensitive techniques but must simply combine beams and look at the intensity. An example is the Michelson stellar interferometer, where one increases the baseline until the fringe pattern disappears because the fringe spacing has dropped below the star's angular diameter. This is making a comeback these days, with ground-based stellar interferometers (the CHARA, NPOI, and COAST systems) routinely imaging simple sources and considerable promise shown by spaceborne interferometers. Using photon counting to measure the output of a wave process produces interesting wrinkles, such as a bias in measurements caused by the positive and quantized detection of photons.
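Michelson's measurement can be sketched in reverse: for a uniform stellar disk the fringes first vanish at a baseline of about 1.22 λ / θ. The wavelength and stellar diameter below are assumed for illustration.

```python
import math

# Baseline at which fringes from a uniform disk of angular diameter
# theta first disappear: B = 1.22 * lam / theta.
lam = 550e-9                                   # visible light (m)
theta_mas = 20.0                               # assumed stellar diameter (milliarcsec)
theta = theta_mas * (math.pi / 180.0) / 3.6e6  # convert milliarcsec to radians

B_null = 1.22 * lam / theta                    # baseline at first fringe null (m)
```

For a 20-milliarcsecond star this comes out to a baseline of order 7 m, which is why supergiants were within reach of Michelson's beam-spacing boom.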

Speckle interferometry is a complicated relative. First proposed by Labeyrie around 1970, it uses the atmosphere's turbulent cells to break the telescope into subapertures. Speckles occur when several cells happen to be in phase and interfere constructively. Thus, the power spectrum of a double star's speckle pattern shows a fringe pattern of beats between the components, and for more complicated sources again Fourier tricks are needed. This is tough in general, since we don't have phases for the Fourier components. Clever tricks involving holography for phase reference and photon time-tagging can be used to allow simple reconstruction.
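Labeyrie's idea can be shown in miniature: the power spectrum of a double-star image carries fringes whose spacing encodes the pair separation. The sketch below skips the atmospheric speckle realizations and transforms a clean toy image; the grid size and separation are arbitrary.

```python
import numpy as np

# Toy double star: two unit point sources 8 pixels apart on a 64x64 grid.
n = 64
img = np.zeros((n, n))
img[32, 28] = 1.0     # component A
img[32, 36] = 1.0     # component B

# Power spectrum of the image.  Along the corresponding frequency axis it
# is 2 + 2*cos(2*pi*k*8/n): fringes with period n/8 = 8 frequency pixels.
power = np.abs(np.fft.fft2(img)) ** 2
```

In a real observation one averages the power spectra of many short exposures, so that the atmospheric term can be divided out with a reference star.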
