by Emily St.
Over the last two and a half years, I have been teaching myself how to photograph objects in the night sky, usually through my telescope, a hobby called astrophotography. Since I began sharing my amateur astrophotography, people have asked me what a photographed subject “really” looks like.
Sometimes the question comes from a place of dissonance between the night sky as remembered and the photograph as seen, especially with wide-field photography. Most people have little experience with the telescopes and specialized cameras used to capture the photos in the first place. On top of that, computers and advanced post-processing have become increasingly important intercessors between us and the spectacular astronomical images they produce. Digital photography has made photo enhancement cheaper, faster, and more flexible, allowing for improvements in contrast, detail, and color, along with features such as software panorama stitching.
In the end, the motivation for the question is simple. The person asking wants to vicariously approach some truth of the observation where the photo may fall short. They want to know what it was like to stand at the telescope and look up.
I admit, the question—what does that really look like?—is not entirely misplaced. However, the idea that there is a single truth of how something “really” looks has problems. Interestingly, this means that the question itself may reveal more than the asker supposes.
I believe truth is an elusive idea with respect to photography, especially astrophotography, where the optical system pushes light to its limits. Instead, the astrophotographer must assemble facts together, with intentionality, to depict the subject.
Early in my journey as a hobbyist astrophotographer, my first photos taken through the telescope frustrated me: they looked blurry and indistinct. It felt like using a camera that I could never quite bring into focus. Over the past couple of years, I have learned techniques that allow me to resolve and depict subjects in photos with greater clarity and vividness than I could when I began. Some of these techniques involve gathering as much information as possible from multiple photos and combining them to get a clearer picture.
In this article, I will illustrate the limitations of light itself and the compounding limitations of our vantage point as observers underneath an atmosphere using a telescope—each adding its own distortions. Then I will examine the theoretical underpinnings of techniques amateur astrophotographers use to deconstruct and surmount these limitations, using lucky imaging and image stacking.
With these techniques, I learned how to gather more facts to compose clearer photos which, with a lot of practice and effort, hew a little closer to the truth as it passes through the lens.
The Limits of Light
In our day-to-day experience, we regard light as a faithful thing. Sighted individuals may rely upon light to show the world on every quotidian scale, from the threads in a woven fabric to a distant mountain on the horizon. We rarely bump up against the limits of detail, other than eyestrain.
However, artificial magnification may overreach these limits. Light microscopes and telescopes both face absolute upper limits on magnification due to light’s wave-like nature. Such instruments use optical systems which bend light, and when light is bent beyond what its wavelength permits, distortions overwhelm the image.
In magnification, the ability to see tiny objects as distinct depends on the ability to resolve them—that is, to distinguish each object from other adjacent ones. Because light propagates as a wave, resolution is ultimately limited by its wavelength. To resolve an object, light must propagate to that object and reflect back to the viewer with enough fidelity to distinguish it from its neighbors.
To better imagine how light waves interact with objects, consider tossing a rock into a very still pond with a dock on the other side. By watching the angles at which the waves bounce back, it is possible to read in them the rough location and size of the dock. However, small features, like a reed poking up out of the water, would be too small for the waves to report meaningfully.
Now consider a light microscope, which focuses light waves. Visible light occupies wavelengths between 390 and 700 nanometers. In a light microscope, this means that green light (at about 550 nanometers), for example, cannot distinguish features smaller than roughly 200 nanometers. The waves bowl over the small details. Like massive container ships trying to skirt a craggy coastline, they are too big and clumsy to hug the tiny curves around its crevices and prominences and report back their intricacies faithfully to the observer.
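To make that figure concrete, the Abbe diffraction limit says the smallest resolvable feature is roughly the wavelength divided by twice the numerical aperture of the objective. Here is a back-of-the-envelope sketch in Python; the numerical aperture of 1.4 is my own illustrative assumption (typical of a high-end oil-immersion objective), not a value from this article.

```python
# Back-of-the-envelope Abbe diffraction limit for a light microscope.
# The numerical aperture of 1.4 is an assumed, illustrative value.
wavelength_nm = 550.0        # green light
numerical_aperture = 1.4

# Abbe limit: smallest resolvable feature is about lambda / (2 * NA).
smallest_feature_nm = wavelength_nm / (2 * numerical_aperture)
print(f"Smallest resolvable feature: {smallest_feature_nm:.0f} nm")  # ~196 nm
```

For green light, this works out to just under 200 nanometers, matching the figure above.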
An optical system (like a light microscope) which is limited in this way is said to be diffraction-limited. This means that the instrument’s optics have reached their theoretical limit, beyond which the wave-like nature of light prevents them from going further. The diffraction limit does not directly cap the highest possible magnification; rather, it sets the smallest angular resolution that can be distinguished. That is, if the field of view is considered as an angle, measured in degrees, minutes, or seconds, then the angular resolution is a fraction of that, measured in the same units (though usually in tiny fractions of a degree, such as arcseconds).

Near the diffraction-limited scale, all point-like objects are seen as Airy disks, which are circular blurs surrounded by Airy patterns of concentric rings radiating outward. (Airy disks are named for George Biddell Airy, a mathematician and astronomer who described them in 1835.) A simulation of an Airy pattern can be seen at the left. Again using green light as an example, objects smaller than about 200 nanometers produce Airy disks that blur together, indistinguishable from one another and from the disks of larger objects. Higher magnification cannot help, nor can more precise focus—the disk is the least blurry that green light will allow.

The precise limit on angular resolution varies depending on the aperture (opening size) of the optical system and the wavelength of the light. A mathematical relation known as the Rayleigh criterion gives an estimate of the diffraction limit of a real system, without taking aberrations or other distortions into account.
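For a circular aperture, the Rayleigh criterion is usually written as θ ≈ 1.22 λ/D, where λ is the wavelength and D is the aperture diameter. As a rough sketch, here is that calculation in Python for green light; the 200-millimeter aperture is an illustrative assumption on my part, roughly the size of a common amateur telescope.

```python
import math

# Rayleigh criterion: theta ~= 1.22 * wavelength / aperture (in radians).
# The 200 mm aperture is an assumed, illustrative amateur-telescope size.
wavelength_m = 550e-9   # green light
aperture_m = 0.200      # 200 mm objective

theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Diffraction-limited resolution: {theta_arcsec:.2f} arcseconds")  # ~0.69
```

That works out to about 0.7 arcseconds, which, as we will see, is smaller than the blurring the atmosphere typically imposes.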
The image to the right shows a simulation of two points of light seen passing through a circular aperture at different degrees of separation. The points of light are seen as Airy patterns, with Airy disks in the middle and concentric rings radiating outward. In the top image, the points are easily distinguished. In the middle image, the points are approaching the Rayleigh criterion at the simulated wavelength but can still be distinguished. Finally, in the lower image, they are basically indistinguishable.
The Limits of Optical Systems
Optical systems—microscopes, telescopes, camera lenses, and so on—never actually achieve the theoretical diffraction limit. There are always slight imperfections in the optics themselves, called aberrations, and in the refractive media through which light passes, such as the air.
Telescopes, as optical systems, share all the same theoretical limits on magnification as microscopes. However, because they are far larger, and because they must look through the entire atmosphere, they are subject to much larger distortions beyond the diffraction limit.
First, the optics themselves inevitably have at least some degree of aberration, both by design—because an optical system without aberration is almost impossible to build for astronomy—and by accident. Optical systems used for imaging incorporate compromises about which forms of aberration are acceptable, balancing intended use against price.
Almost every telescope in use today uses multiple specially shaped mirrors to gather, magnify, and direct light. Any slight, accidental imperfections in their shapes, or misalignments relative to one another, will cause further aberrations in the overall system—and in the resulting image. Consumer-grade telescopes are particularly prone to collimation errors, which are misalignments of their optical components.
These sources of aberration add their own distortions on top of the inherent diffraction distortions. Because of them, each Airy disk seen through the telescope further distorts into a more complex shape, spreading irregularly from the center of any point-like source of light. The distortions add together.

Both of these sets of distortions may be thought of as a mathematical function taking a point of light and transforming it into a complex disk shape. Each of these disks can be mathematically described as a point-spread function, or PSF. Each PSF describes how the perfect point of light that enters the telescope becomes transformed—or convolved. A single PSF can also describe the combination of all the component distortions, giving the overall distortion the telescope produces.
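As a toy illustration of what a PSF does, the sketch below blurs a single bright pixel (a stand-in for a star) with a small Gaussian PSF using SciPy. The Gaussian shape and its two-pixel width are arbitrary assumptions for illustration; a real telescope’s PSF is the more complicated disk described above.

```python
import numpy as np
from scipy.signal import fftconvolve

# A toy "true" scene: a single bright point (a star) on a dark field.
scene = np.zeros((64, 64))
scene[32, 32] = 1.0

# A toy PSF: a small Gaussian standing in for the combined diffraction,
# optical, and atmospheric distortions; the 2-pixel sigma is arbitrary.
y, x = np.mgrid[-8:9, -8:9]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

# What the camera records is (approximately) the true scene convolved
# with the PSF: the point spreads out into a disk.
observed = fftconvolve(scene, psf, mode="same")
```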
For example, when the Hubble Space Telescope was first launched, scientists discovered its primary mirror had the wrong shape (too flat by 2.2 micrometers). As a result, it had an overall PSF that distorted point-like sources heavily and made certain kinds of observations almost impossible. A later servicing mission corrected the aberration by introducing corrective optics with the opposite distortion to cancel out the misshapen mirror, allowing the telescope to function as intended.
The Limits of Atmospheric Seeing
Almost every telescope is Earth-bound, though. For us on the ground, far more important than optical aberrations are the literal miles of shifting air through which the telescope must observe. Astronomers refer to atmospheric conditions as seeing, and so they call the consequent distortion disk the seeing disk; it further complicates the baroque pattern caused by the Airy disk and the distortion inherent in any optical system. This seeing disk is far larger, completely unpredictable, and constantly shifting, changing up to a hundred times a second during observations. It may distort a point so dramatically that it becomes nothing more than irregular speckles.
Atmospheric seeing dominates all other distortions in the final PSF. While the theoretical angular resolution of a diffraction-limited telescope may be a hundredth of an arcsecond or less, and optical distortions may add a little on top of that, the seeing disk limits angular resolution to a much larger (worse) value. The size of the seeing disk is usually quoted as its full width at half maximum, or FWHM. It might be as little as half an arcsecond under the very best conditions at high altitudes, or five or more arcseconds under bad conditions. This can make the difference between seeing two stars close together or one blurry star (such as Zeta Aquarii, a binary system whose stars are only two arcseconds apart). In other words, it is like going from 20/20 vision to 20/200 vision.
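As a crude illustration of what those numbers mean, the rule of thumb below treats a double star as blurred into one whenever the seeing FWHM exceeds the pair’s separation. The separation uses the two-arcsecond figure for Zeta Aquarii above; the seeing values are representative examples, and the rule itself is a simplification.

```python
# A crude rule of thumb: a double star blurs into one when the seeing
# FWHM exceeds the pair's angular separation.
separation_arcsec = 2.0  # roughly Zeta Aquarii's separation

for seeing_fwhm in (0.5, 2.0, 5.0):  # excellent, mediocre, poor seeing
    outcome = "two stars" if seeing_fwhm < separation_arcsec else "one blur"
    print(f"Seeing of {seeing_fwhm} arcseconds: {outcome}")
```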
Clearly, all this distortion limits what we can see from the ground through telescopes. It would be to our advantage to limit or minimize how many of these distortions we encounter in the first place. Optical systems have improved over the centuries since the telescope was first invented, minimizing inherent aberrations and optimizing the remaining ones for specific situations. Many ways of coping with atmospheric distortions have emerged, including better observation locations, adaptive optics, and even space telescopes which don’t have to look through an atmosphere.
The next best thing to owning a space telescope or moving to a mountaintop, though, would be to analyze and deconstruct the PSF resulting from these distortions. This article isn’t about better optics or better cameras. It’s about taking imperfect, noisy data and attempting to mine it for a clearer signal.
Using Lucky Imaging and Image Stacking
Astronomers combine two techniques, called lucky imaging and image stacking, to deconstruct the atmospheric distortions. They essentially involve taking many images, culling them down to the best, and averaging those together to make a single, higher-quality image. In outline, the process goes as follows.
First, using a camera (often with a telescope for smaller, dimmer objects) and a tracking mount (which ensures each photograph captures the same portion of the sky), an astronomer takes many photographs. The number varies, but often several hundred are used for planetary images, for example. Ideally, the evening’s conditions will have good atmospheric seeing.
Next, software determines the quality of each image. Identifying the quality of an image involves statistical techniques that vary from situation to situation and from one software program to the next, but generally quality algorithms attempt to determine how sharply defined each image is, sometimes guided manually by landmarks supplied by the user. For example, some freeware and commercial programs aimed at hobbyists allow users to mark points of interest, edges (such as planetary limbs), and so on.
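There are many ways to score sharpness. One simple proxy, sketched below, is the variance of the image’s Laplacian: more fine detail means more high-frequency structure and a higher variance. This is my own illustrative choice, not necessarily what any particular stacking program uses.

```python
import numpy as np
from scipy import ndimage

def sharpness_score(image: np.ndarray) -> float:
    """Variance of the Laplacian: higher values mean more fine detail.

    This is one simple, common sharpness proxy; real stacking software
    applies its own (often more sophisticated) quality metrics.
    """
    return float(ndimage.laplace(image.astype(float)).var())
```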
Based on the quality of the corpus of images, the astronomer can then elect to keep only a small fraction of the images (such as the best 10%) or a larger share (such as 50% or more). This is where the lucky part comes into play—with any luck, the camera caught some images at moments when the atmosphere stood a bit more still. By keeping the highest-quality images and culling the poorest, we have already boosted the signal (the image) against the noise (the distortions).
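Continuing the sketch, culling then amounts to ranking the frames by their quality scores and keeping the top fraction; the ten-percent cutoff below is just an illustrative default.

```python
def select_best(frames, scores, keep_fraction=0.10):
    """Keep the top fraction of frames by quality score; the 10% default
    is an arbitrary illustrative cutoff."""
    ranked = sorted(zip(scores, frames), key=lambda pair: pair[0], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return [frame for _, frame in ranked[:keep]]
```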
Finally, the photographs are layered upon one another, cropped, rotated, and aligned so that they coincide, then averaged. At this point, the randomness of the atmospheric seeing distortions has been superimposed at each point in the image. The idea is that, in the averaging at each point, the underlying image statistically strengthens while the random distortion weakens—specifically because it is random. This works best when the best images are used, taken under nearly identical conditions, and aligned precisely.
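Here is a minimal stacking sketch along those lines: it estimates a whole-pixel offset for each frame against the first one using an FFT-based cross-correlation, shifts the frames into alignment, and averages them. Real stacking software also handles rotation, sub-pixel shifts, and distortion, so treat this only as an outline of the idea.

```python
import numpy as np

def integer_shift(reference: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Estimate the whole-pixel shift that best aligns `frame` with
    `reference`, found at the peak of an FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap offsets so they fall in the range [-size/2, size/2).
    dy = dy - reference.shape[0] if dy > reference.shape[0] // 2 else dy
    dx = dx - reference.shape[1] if dx > reference.shape[1] // 2 else dx
    return int(dy), int(dx)

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Align each frame to the first one and return the per-pixel average."""
    reference = frames[0].astype(float)
    aligned = [reference]
    for frame in frames[1:]:
        dy, dx = integer_shift(reference, frame.astype(float))
        aligned.append(np.roll(frame.astype(float), shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```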
This averaging still leaves a somewhat indistinct image containing an average of all the random noise from the atmospheric distortions, so it needs one more processing step. Earlier, I said that images distorted by point-spread functions (PSFs) are convolutions, or mathematically well-defined transformations, of the true object. The average of the atmospheric seeing can be treated as another PSF, but one which is more regular and which can be deconstructed more easily than random noise. Using a deconvolution algorithm, it’s possible to further strengthen the signal and deemphasize the noise. To be sure, the result is a kind of mathematical guess at what the image would have been had the atmosphere not been there in the first place, but it is a guess informed by many photos, and therefore quite a bit stronger than any single photo could provide.
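As a concrete example of that step, scikit-image ships a Richardson–Lucy implementation. The sketch below deconvolves a stacked image against a small Gaussian PSF guess; the Gaussian stand-in, its width, and the iteration count are all illustrative assumptions, and the keyword for the iteration count (num_iter here) has changed name across scikit-image releases.

```python
import numpy as np
from skimage import restoration

def deconvolve_stack(stacked: np.ndarray, sigma: float = 1.5,
                     iterations: int = 30) -> np.ndarray:
    """Richardson-Lucy deconvolution of a stacked image against a Gaussian
    PSF guess. The Gaussian stand-in, its width, and the iteration count
    are illustrative defaults, not recommendations."""
    y, x = np.mgrid[-7:8, -7:8]
    psf_guess = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    psf_guess /= psf_guess.sum()
    # scikit-image expects the image roughly scaled to the [0, 1] range.
    normalized = stacked.astype(float) / stacked.max()
    return restoration.richardson_lucy(normalized, psf_guess, num_iter=iterations)
```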
The particular PSF needed to perform the best deconvolution is often a matter of trial and error at first—this is a blind deconvolution. Software used to configure the deconvolution will sometimes offer a wavelet transform, a mathematical transformation that decomposes the image into differently sized detail scales, each of which can then be sharpened or deconvolved separately (usually configurably). In any case, the configuration of the parameters will vary a great deal depending on the particular algorithm, the object itself, the conditions under which the images were taken, and the quality of the images used.
The configuration these programs offer usually entails choosing a specific algorithm (such as Richardson–Lucy deconvolution), the number of iterations to perform (since many of these algorithms must be applied repeatedly to achieve a desired result), the distribution of the photon noise (Poisson or Gaussian), the size of the deconvolution kernel (a matrix of weights applied around each pixel), and so on. Some implementations may run iterations over multiple layers with differently sized deconvolution kernels to emphasize differently sized features. Some may not expose any of these details, or may expose many more.
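In the same spirit as those wavelet layers, the sketch below splits an image into detail bands by differencing Gaussian blurs of increasing width and recombines them with per-band gains, so that small features can be boosted independently of larger ones. It is a simplification of what hobbyist programs actually do, and the scales and gains are arbitrary illustrative values.

```python
import numpy as np
from scipy import ndimage

def multiscale_sharpen(image: np.ndarray, sigmas=(1.0, 2.0, 4.0),
                       gains=(1.5, 1.2, 1.0)) -> np.ndarray:
    """Crude multi-scale sharpening in the spirit of 'wavelet layers':
    split the image into detail bands by differencing Gaussian blurs of
    increasing width, then recombine the bands with per-band gains.
    The sigmas and gains are arbitrary illustrative values."""
    image = image.astype(float)
    blurred = [image] + [ndimage.gaussian_filter(image, s) for s in sigmas]
    # Band i holds the structure between adjacent blur scales.
    bands = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
    residual = blurred[-1]  # everything larger than the widest scale
    return residual + sum(g * b for g, b in zip(gains, bands))
```

With all gains set to one, the bands and residual recombine into the original image exactly; raising the gain on the finest band emphasizes the smallest details.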
In a sense, applying a deconvolution is analogous to attempting to hammer flat the complex distortion disk created by the atmosphere in the first place. Because the original distortions are essentially random and vary from one image to the next, it’s not possible to reconstruct the original PSF entirely, but with statistics, the image stacking process works around it by gathering as much signal as possible to identify what’s noise. That way, we build a new PSF that’s easier to invert, and with a little luck, the detail we couldn’t see before will pop out.
Drawing the Moon
The techniques above, lucky imaging and image stacking, are used together to obtain the final result. They may be used for any kind of astronomical imaging at all—planets, stars, deep-sky objects, or even the Moon.

I made the image comparison above using images I photographed in July 2017. The left side shows a close-up detail from an individual snapshot I took that night. The right side shows the same detail from a composite made from fifty-eight such snapshots. (It has also been processed for color contrast.)
The unprocessed left-side close-up represents the absolute best focus I could achieve that night with my telescope and camera. It’s an “honest” photo inasmuch as I haven’t processed its colors, sharpened it, or changed it in any way besides the crop for this comparison. However, I don’t think it’s the “truth” because it doesn’t remind me of what the Moon looked like that night.
My brain doesn’t take still snapshots, so it doesn’t remember a still image. It doesn’t remember the Moon looking that blurry and flat. It doesn’t remember the color that way, either. I saw the Moon shimmer in the summer air. My brain also filtered out some of that distortion and saw detail that the photo didn’t capture. The real experience felt more dramatic, maybe because the Moon was so much brighter at the time—telescopes gather so much light from the Moon they can sometimes leave spots in your eyes after a moment.
The right side reminds me more of what I saw. The contrast I remember is there. The light hits the craters sharply and makes them feel like three-dimensional things. It feels more like a real, textured object, the way the telescope makes it seem in the moment.
What’s more, the image on the right reveals things the left one simply cannot. These are not fabricated details—they reflect the Moon’s surface more faithfully than the individual snapshots possibly could. Even though no individual snapshot I took that night contains that additional detail, together, they combined to evince it.
In making the right-hand image, I wanted to show others the Moon the way I experienced it, that feeling of being right up close. After the processed image popped out, it looked to me the same as it had through the telescope that night. My very first thought was that I finally had a real image of the Moon to show.

Emily St. is a software engineer who does astrophotography as a hobby in her spare time.