AAVSO DSLR Observing Manual v1-2
AAVSO
49 Bay State Road
Cambridge, MA 02138
email: [email protected]
Version 1.2
Copyright 2014 AAVSO
Foreword
This manual is a basic introduction and guide to using a DSLR camera to make variable star observations.
The target audience is first-time beginner to intermediate level DSLR observers, although many advanced
observers may find the content contained herein useful.
The AAVSO DSLR Observing Manual was inspired by the great interest in DSLR photometry witnessed
during the AAVSO’s Citizen Sky program. Consumer-grade imaging devices are rapidly evolving, so we
have elected to write this manual to be as general as possible and move the software and camera-specific
topics to the AAVSO DSLR forums. If you find an area where this document could use improvement,
please let us know. Please send any feedback or suggestions to [email protected].
Most of the content for these chapters was written during the third Citizen Sky workshop, held March
22-24, 2013 at the AAVSO. The persons responsible for creating most of the content in the chapters
are:
Chapter 1 (Introduction): Colin Littlefield, Paul Norris, Richard (Doc) Kinne, Matthew Templeton
Chapter 2 (Equipment overview): Roger Pieri, Rebecca Jackson, Michael Brewster, Matthew Templeton
Chapter 3 (Software overview): Mark Blackford, Heinz-Bernd Eggenstein, Martin Connors, Ian Doktor
Chapters 4 & 5 (Image acquisition and processing): Robert Buchheim, Donald Collins, Tim Hager, Bob
Manske, Matthew Templeton
Chapter 6 (Transformation): Brian Kloppenborg, Arne Henden
Chapter 7 (Observing program): Des Loughney, Mike Simonsen, Todd Brown
Various figures: Paul Valleli
Index
1. Introduction
1.1. Prologue
1.2. Target audience
1.3. The what, why, and how of DSLR photometry
1.4. Visual vs. DSLR vs. CCD observing
1.5. Are you ready? (Prerequisites)
1.6. Expectations
2. Equipment Overview
2.1. What is a DSLR?
2.2. Lenses and telescopes
2.3. Tripods and mounts
2.4. Camera settings
2.5. Filters and spectral response
3. Software Overview
3.1. Minimum requirements for DSLR photometry software
3.2. Useful software features
3.3. Optional features
3.4. Software capability comparison chart
3.5. Other useful software
4. Image Acquisition
4.1. Acquisition overview
4.2. Preparatory work
4.3. Noise sources and systematic biases
4.4. Calibration frames (bias, darks, and flats)
4.5. ISO and exposure times
4.6. Finding and framing the field
4.7. Acquiring science data and tricks of the trade
5. Image Assessment, Image Processing, and Aperture Photometry
5.1. Overview
5.2. Processing preliminaries and image assessment
5.3. Application of calibration frames, co-registration, stacking, and binning
5.4. RGB channel extraction
5.5. Post-calibration assessment
5.6. Aperture photometry
5.7. Differential photometry
6. Photometric Calibration
6.1. Standardized photometry
6.2. Transformation
6.3. Submitting your results
7. Developing a DSLR Observing Program
7.1. Deciding what to observe
7.2. What are some good stars to begin with?
Appendix A: Determining Optimal Exposure Times and Saturation Limits
Appendix B: Linearity Check (DFC) and Characterizing the DSLR
Appendix C: Calibration Pre-assessment: Testing Dark Images for Hot Pixels
Appendix D: Testing Flats for Uniform Illumination
Chapter 1: Introduction
1.1 Prologue
You're out jogging one night in your usual park. It's dark, but you know the area well, so you
don't have a fear of running into anything. Tonight turns out to be different, though. As you jog
you notice someone just off the path with a camera on a tripod. Strangely, you notice that both
she and her camera are pointed up, right up at the sky above. You glance up at the sky as you jog
past, seeing just the brightest points of light in your light-polluted sky. What could she be doing?
In this case, the woman, a high school history teacher by day, is measuring the brightness of
certain stars, collecting data that will be of use and interest to professional astronomers. She’s
one of a growing breed of people called “citizen scientists.” This
manual will show you how you can participate as well.
Most of us who have a passing interest in astronomy, perhaps follow an astronomy magazine every so
often, have seen the stunning photos that grace their pages. Most of these pictures are taken with cameras
attached to guided telescopes and processed so that they look as good as they do. That is the realm of
astrophotography. This manual will take you in a different direction. Here we're going to take a look at
how you can take scientifically valuable photographs to measure the brightnesses of variable stars ― stars
whose brightnesses change over time. The goal of this manual is to guide you through the process of
using the same DSLR camera that you use for general photography to contribute scientific quality data to
the astronomical community.
All stars change in brightness due to the physical processes happening inside, on, or near the star. By
carefully observing this variability, it is possible to learn a great deal of information about the star and,
more generally, astrophysical phenomena. In a very real sense, therefore, variable stars are like physics
laboratories. The same fundamental physical processes that operate here on Earth – gravity, fluid
mechanics, light and heat, chemistry, nuclear physics, and so on – operate exactly the same way all over
the universe. By watching how stars change over time, we can learn why they change.
Although many stellar variations cannot be reliably detected from the ground (more on that in a later
chapter), there are still hundreds of classes of variable stars, each with a few to several thousand known
members. For example, stars may change in size, shape or temperature over time (pulsators), they may
undergo rapid changes in light due to physical processes around the star (accretors and eruptives), or they
may be eclipsed by stars or planets in orbit around them (binaries and exoplanets). The key is that
something is physically happening to the star itself or in its immediate vicinity. (You may see a star
twinkle in the sky, but that variation is due solely to the Earth's atmosphere and is completely unrelated to
the star.)
Different kinds of stars vary on different timescales. Some stars may take weeks, months, or years to
undergo changes that we can detect. Others take days, hours, minutes, seconds or even less. Some stars
vary regularly, and we can see patterns in the variations that repeat over time. Other stars undergo chaotic
changes that we can never predict exactly. Some stars vary the same way for centuries, while others –
like supernovae – may flare up briefly and then disappear, never to be seen again.
Figure 1.1 DSLR observations of epsilon Aurigae during its 2009-2011 eclipse. Each data point on this
plot was contributed by an amateur astronomer.
Variable stars also have a range of apparent brightnesses (how bright they appear to us) as well as a range
of intrinsic luminosities (how much light they actually give off). A star may be intrinsically luminous,
but if it is thousands of light years away, it will appear to be faint. Variables also have a range of
amplitudes―that is, how much their brightness changes over time. Some variable stars can vary by ten
magnitudes or more, which is a factor of ten thousand in brightness, a huge change! Some variable stars
vary by a millimagnitude, or even less, and their variations may be impossible for you to detect. There
are many stars in between, and there's no shortage of targets that you'll be able to do productive work on,
regardless of your equipment. With this manual, you’ll learn how to use your DSLR to obtain valuable
scientific measurements of these stars and report your findings so they can be used for scientific research.
So how do amateurs fit into this picture? Professional astronomers extensively use photometry, but
because they have only limited observing time, they frequently depend on amateur astronomers to
perform photometry on interesting objects for them. As a result, your observations provide the raw
material which powers scientific inquiry. Scientists can speculate endlessly about why things appear and
behave the way they do, but ultimately, those hypotheses must be tested in order to productively advance
our scientific understanding. If you give researchers reliable data, they can make accurate models to
describe how the universe works, and our understanding improves and expands. For example, amateurs
used off-the-shelf DSLR cameras to regularly monitor the brightness of Epsilon Aurigae, a notoriously
enigmatic binary system, as it underwent a long-awaited eclipse between 2009 and 2011 (see Figure 1.1).
Thanks to the work of these amateurs, professional astronomers received a wealth of useful data from
which they were able to glean new insights about the
mysterious binary.
Another option for an observer is to use a CCD camera attached to a telescope. At a superficial level,
CCD photometry is similar to DSLR photometry. Professional astronomers use CCDs for photometry
because they offer superior image quality and versatility, but good CCDs tend to be significantly more
expensive than DSLRs and have a steeper learning curve, too. In terms of cost effectiveness, DSLRs will
generally offer much better value than CCDs. The AAVSO has published a CCD observing manual
which comprehensively describes CCD photometry.
1.6 Expectations
In general, this manual will focus on the aspects of observing variable stars with DSLR cameras.
Although we use the word “DSLR” extensively throughout this text, we are using it to refer to a general
class of camera that is suitable for conducting photometric observations. Recently, many point-and-shoot
cameras have started to support several features that are required for doing astronomical photometry.
Hence, the text discussed here may be applicable to your camera, even if it is not a DSLR.
In this manual we focus on variable stars because stars are among the easiest objects to measure. The
techniques you learn are applicable to a wider range of objects (like exoplanet transits, and active galactic
nuclei), but they may not be as accessible without a more substantial investment. With a few exceptions,
we won’t go into the details about how a DSLR works or how to operate any specific model of camera.
Also, please realize that the techniques used in DSLR photometry are similar to, but not identical to,
those of astrophotography. In particular, the de-focusing used in DSLR photometry will result in blurry
images that are not pretty to look at, but are scientifically valuable.
Figure 1.3 What you should expect to see in DSLR photometry. Image from the AAVSO CCD observing
manual, courtesy of A. Henden (USNO/AAVSO).
It is the goal of this manual to demystify the process of obtaining scientific-quality photometry with
DSLR cameras. Many amateur astronomers are unnecessarily intimidated by photometry’s supposedly
steep learning curve, but with DSLRs it is possible to start taking useful data almost immediately.
Although it is true that obtaining good photometric data requires careful data analysis and attention to
detail, photometry is a field which is readily accessible to amateur astronomers who lack a technical
background. Enthusiasm, patience, and good technique, rather than in-depth mathematical or scientific
aptitude, are all that is required.
"I feel it is my duty to warn others...that they approach the observing of variable stars with the
utmost caution. It is easy to become an addict, and as usual, the longer the indulgence is
continued the more difficult it becomes to make a clean break and go back to a normal life.”
Leslie C. Peltier (1900-1980)
References
AAVSO Visual Observing Manual: http://www.aavso.org/visual-observing-manual
AAVSO CCD Observing Manual: http://www.aavso.org/ccd-observing-manual
Chapter 2: Equipment Overview
DSLR cameras provide one of the most economical ways to become involved in digital photometry.
There are fundamentally three things that are required: a lens or focusing device, a camera capable of
providing images in an unprocessed format, and something to stabilize the camera during long exposures.
These devices can be as simple as a suitable point-and-shoot camera on a fence post, or as elaborate as a
professional-grade camera looking through a telescope. Prior to discussing how one conducts the
observations and reduces the data, it is best to first understand exactly what equipment is required to do
DSLR photometry. As we will be discussing each of these three components in detail, we have taken the
liberty of explaining some of the physical aspects of the camera so that you may better understand what
happens when you adjust various camera settings.
The fundamental difference between a DSLR camera and an SLR camera is that in a DSLR the image is
recorded electronically by a sensor and stored in a file, rather than on film as in an SLR camera. As illustrated in
Figure 2.1, a DSLR camera is made from an ensemble of optical and electronic components that are
needed for capturing images. Many modern DSLR cameras also come with a plethora of settings and
software processing options, most of which are not useful for astronomical photometry.
Figure 2.1. A cutaway of a DSLR camera showing the various components involved
2.1.1. Optical path
The camera consists of a lens attached to the front of the camera body, a shutter, several large filters, a
microlens array, additional filters, and a detector. The optical components in which we are most interested
are shown schematically in Figure 2.2. The first optical component is the lens. Its primary purpose is to
project and focus an image onto the sensor. Behind the lens is the F/stop diaphragm. This determines the
total aperture, or light gathering surface, of the lens. These components are typically contained within the
lens body itself. Within the camera body, the first element encountered is typically the shutter. The
purpose of the shutter is to control the light entering the camera. In a film camera the shutter is closed
except when opened to allow light to strike the film. In a DSLR camera, however, the opposite is true: the
shutter remains open, and is closed only so that the light that has struck the sensor can be read out. Behind the
F/stop diaphragm is a series of filters that eliminate unwanted (infrared and ultraviolet) light.
Figure 2.2. The typical optical layout of a DSLR camera with a CMOS detector and RGB Bayer array
(courtesy of Roger Pieri).
2.1.2. CMOS detector and Bayer array
Depending on which detector your camera uses, what lies between the filters and the detector can differ
dramatically. Most DSLR cameras have a CMOS (Complementary Metal Oxide Semiconductor) detector
with a Bayer array (see Figure 2.3) of red, green, and blue (RGB hereafter) pixels; there are often two sets
of green pixels, so the array can be thought of as RGGB. In front of the Bayer array is a low-pass filter
that spreads the light out to reduce the Moiré pattern caused by the Bayer structure. Behind this filter and
immediately in front of the detector, a microlens array focuses the light onto a single layer of sensitive
photodiodes in each pixel. Printed onto these pixels are pigment filters that split the spectrum into RGB
color channels. For more in-depth discussion of channels and filters, see Section 2.5.
Figure 2.3. Schematic showing a typical Bayer matrix color filter arrangement. The specific order of
colors may vary between manufacturers so it is important to determine which color channel in your DSLR
corresponds to red, which to blue, and which to green.
The voltage increase of a single photoelectric event is quite small, hence the accumulated voltage on the
capacitor is similarly tiny. In order for this signal to be read, it is first passed through an amplifier before
going to the analog-to-digital converter (ADC). The gain setting of the amplifier corresponds to the “ISO”
setting (a measure of the sensitivity of the detector) and matches the signal to the fixed input range of the converter.
The ADU (arbitrary digital units) output of the ADC is proportional to the number of electrons collected
by the photodiode of each pixel. When saved as a RAW data file, these ADU values are the fundamental
information used in DSLR photometry. This discussion is continued in greater detail in Section 2.4.
Because all DSLR cameras on the market today have CMOS sensors, we will concentrate on this type of
device. For a discussion of CCD camera technology, please see the AAVSO CCD Observing Manual.
Cameras with Foveon detectors (which have three color-specific layers of pixels instead of a single plane
of different colored pixels) are not often seen. If you wish to know more about them, please ask on the
AAVSO DSLR Photometry forum.
Figure 2.4 shows a schematic representation of a CMOS detector. The sensor itself is made from a silicon
chip onto which the CMOS circuitry is etched. The photon-sensitive element in each pixel is a photodiode
(or a MOS photogate). These devices operate by the photoelectric effect, in which a photon impacting the
detector generates an electron-hole pair. Due to the construction of the photodiode, the electron is quickly
moved out of the bulk material and pushed onto a nearby capacitor. At the beginning of an exposure, this
capacitor is reset and its voltage read. During the exposure, each impacting photon results in a slight
decrease in charge on the capacitor. At the end of the exposure, the voltage on the capacitor is read a
second time.
Figure 2.4. Schematic representation of the components of a CMOS detector (courtesy of Roger Pieri).
Modern DSLR cameras have a plethora of additional functions, most of which are not useful and can even
be harmful when conducting photometric measurements. Foremost, JPEG images should never be used in
astronomical photometry. To generate a JPEG image, the RAW ADU values from the detector are sent to
a processor that converts the image into a non-linear sRGB color space (absolutely non-photometric) and
then compresses it into a JPEG file. The non-linearity and compression lead to a significant degradation
of data precision (from ~14000 levels of brightness to a maximum of 256 levels) that prohibits precise
flux measurement. Some cameras have a de-noising or image enhancement function that modifies the
underlying data, possibly corrupting the photometric data in the process. Functions that measure the
illumination of a scene, and autofocus, are nearly useless for photometry. The “liveview” magnification
function (5x, 10x, etc.) is useful to focus/defocus on a bright star, but the viewfinder (possibly with a
right-angle adapter) is often more useful when framing the desired sky area.
2.2 Lenses and telescopes
The first step in doing DSLR photometry is getting light into the camera. Starlight must be focused on the
sensor either by a lens directly mounted on the camera or by attaching the camera to a large, telephoto
lens or a telescope. Figure 2.5 shows a typical assortment of DSLR lenses.
The lens is the first element of the photometric chain. Lenses can be generally described by two
properties: aperture and focal length. The aperture area determines how many photons may enter the
optical system within a fixed period of time. Larger apertures collect more light and permit fainter targets
to be observed. The focal length measures how strongly the lens causes the light to converge on the
detector. When combined with the size of the detector, the focal length determines the field of view
(FOV) of the instrument.
A large enough FOV – the area of the sky your camera can see - is needed to be able to include a good set
of comparison stars in addition to the target star. A short focal length lens has a wide FOV, thus it is well-
suited for measuring bright variables (bright comparison stars are farther apart than faint ones), and for
capturing many stars simultaneously for bulk analysis. The longer the focal length of the lens, the more
“zoomed in” you are, that is, you see a smaller area of the sky but in greater detail. Thus, for fainter stars,
a longer focal length lens or a telescope is needed. At a given F/stop (the F/stop is a setting that determines
the area of the aperture), the sky background level is the same for all focal lengths, but the aperture area,
and hence the number of photons reaching the detector from a star, is proportional to the square of the focal
length. Therefore, zooming in strongly increases your ability to measure fainter stars, because the SNR
(signal-to-noise ratio) relative to the sky-background shot noise (more on these in Chapter 4) improves considerably.
What lens should you use? There are two approaches to deciding. The first is to use the lens that you
have, and select targets that are compatible with your camera/lens combination. There are plenty of stars needing
attention, so almost any lens/camera combination can be put to good use. The second approach is to fall
in love with a particular star or project, and acquire a lens/camera setup that is a good match to the needs
of the chosen project. In either case, your choice of equipment will be a balancing act between several
lens parameters. These parameters include field of view, aperture size, focal length, limiting magnitude,
and achievable exposure duration.
Almost all DSLR variable-star projects use “differential photometry”, in which the brightness of the
target variable star is compared to the brightness of a nearby – constant-brightness – “comparison star”.
In order for this to work, you need to have both the target and the comp star in the same image field of
view, and the “comp star” should be roughly the same brightness as your target. If your target is bright
(say, a naked-eye star), then most likely you’ll need a FOV of several degrees (or more – maybe 10 to 30
degrees) in order to capture a comparable-brightness comp star in the same image as your target. A wide
FOV implies a short focal length, which in turn is nicely compatible with the standard lenses that come
with most DSLR camera kits.
If your target is faint, then you want to balance two approaches to achieving a high-signal image. You
can take a long exposure; or you can use a lens with a large aperture. Doubling the exposure doubles the
number of photons that you collect (all other things being equal); but this can be a problematic approach
as you move to fainter targets. You might be able to capture a nice image of a naked-eye star (5th mag,
say) in a 10-second exposure using your standard 50mm lens. But going after a 10th magnitude star
(which provides only 1/100th as many photons per second) would require a 1000-second exposure (nearly
17 minutes) which means that you need to accurately follow the sky’s rotation for that long exposure, and
also raises a host of other challenges. That 50mm standard lens probably has a collecting aperture
diameter of about 15mm (less than one inch) – not very large! By mating your camera to a telescope, you
can achieve a huge increase in collecting aperture. For example, a modest 6-inch aperture telescope will
easily provide 40 to 100 times the collecting area of a standard 50mm lens, thereby extending your reach
to that 10th magnitude star with little or no increase in exposure duration. Of course, the telescope is
likely to have a fairly long focal length (say 30 to 60 inches), and hence provide quite a narrow FOV.
This means that you are not likely to have a bright star within the FOV (but there’s a good chance that
you’ll have a few faint – say, 10th magnitude – comp stars, which is what you need for a 10th mag target).
The narrow FOV implies the need for a good tracking mount, but that is likely to be part of your telescope
setup, anyway.
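To make this scaling concrete, here is a small sketch (illustrative only; the 15 mm and 150 mm clear apertures are assumed round numbers, not measurements of any particular lens or telescope) of how the required exposure grows with magnitude difference and shrinks with collecting area:

```python
import math

def flux_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference.
    Five magnitudes correspond to a factor of 100 in flux."""
    return 10 ** (0.4 * delta_mag)

def scaled_exposure(t_ref, mag_ref, mag_new, d_ref_mm, d_new_mm):
    """Exposure needed to collect the same number of photons from a fainter
    star, after allowing for a change in aperture diameter."""
    area_gain = (d_new_mm / d_ref_mm) ** 2          # collecting-area ratio
    return t_ref * flux_ratio(mag_new - mag_ref) / area_gain

# 5th-mag star in 10 s with a 50 mm kit lens (~15 mm clear aperture, assumed):
print(scaled_exposure(10, 5, 10, 15, 15))    # ~1000 s for a 10th-mag star, same lens
print(scaled_exposure(10, 5, 10, 15, 150))   # ~10 s with a ~150 mm telescope aperture
```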
So, there is a role for everything from “low end” kit lenses (fairly bright stars), to telephoto lenses (fainter
targets with appropriate comp stars within a few degrees), and telescopes (faint targets with one or two
comp stars within the narrow field of view).
Figure 2.6 shows how the view of Orion depends on the field of view, so you can determine the field of
view of your equipment by comparing your view to the diagram.
Figure 2.6. The main figure of Orion used to determine the field of view for an APS-C sensor-equipped
system at various focal lengths.
If you know the focal length of your lens and the size of the sensor in your camera, you can determine the
field of view from the equation or Table 1 below. Consult your manual to find the sensor size in your
camera. The most common DSLR and “hybrid” camera sensor size is APS-C, approximately 14.9 x 22.3
mm, but other formats also exist in cameras known to be used for photometry: the 4/3 system (13 x 17.3 mm),
the 1” format of some hybrids (8.8 x 13.2 mm), and the 1/1.7” format used in “expert” compact cameras
(5.7 x 7.6 mm). The “full frame” format (24 x 36 mm) also exists, but it is less common, very expensive,
and subject to vignetting problems. All of these cameras have a RAW image file mode.
The equation (a small-angle approximation that is adequate for focal lengths greater than about 50 mm with an APS-C sensor, and easy to use in the field) is:
FOV (degrees) = 57 * sensor dimension (mm) / focal length (mm)
Table 1: Example of focal length (mm) needed to cover a given FOV for typical sensor sizes.

FOV     APS-C         4/3 System    1" System     1/1.7"       1/2.3"       Full Frame
(deg)   14.9 x 22.3   13 x 17.3     8.8 x 13.2    5.7 x 7.6    4.6 x 6.1    24 x 36 (mm)
64      18            14            11            6            5            29
48      25            19            15            9            7            40
32      39            30            23            13           11           63
24      52            41            31            18           14           85
16      79            62            47            27           22           128

Note: the longest focal lengths correspond to very expensive lenses; for those it is better to use a telescope connected to the camera body.
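For the widest fields the 57 * sensor / FL approximation drifts from the exact relation FOV = 2 * arctan(sensor / (2 * FL)), which appears to be what Table 1 uses; the short sketch below (illustrative only) compares the two and reproduces a Table 1 entry.

```python
import math

def fov_deg_exact(sensor_mm, focal_length_mm):
    """Exact field of view (degrees) along one sensor dimension."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm)))

def fov_deg_approx(sensor_mm, focal_length_mm):
    """Small-angle approximation from the text; adequate for FL above ~50 mm."""
    return 57 * sensor_mm / focal_length_mm

def focal_length_for_fov(sensor_mm, fov_deg):
    """Focal length (mm) needed to cover a given FOV (exact formula inverted)."""
    return sensor_mm / (2 * math.tan(math.radians(fov_deg / 2)))

# APS-C long side (22.3 mm) with a 50 mm lens:
print(round(fov_deg_exact(22.3, 50), 1), round(fov_deg_approx(22.3, 50), 1))  # ~25.1 vs ~25.4 deg

# Focal length for a 24-degree field on the long side:
print(round(focal_length_for_fov(22.3, 24)))   # 52 (compare Table 1: 52 mm)
```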
Table 2 shows the area of aperture at F/4 for the focal lengths and sensor types in Table 1. The enormous
range of photon flux as a function of FOV and sensor size can clearly be seen. Thus, the F/stop
determines the ability of each configuration to access and measure a large range of magnitudes.
Table 2: Aperture area (mm²) at F/4 for the focal lengths and sensor types in Table 1.

FOV     APS-C         4/3 System    1" Format     1/1.7"       1/2.3"       Full Frame
(deg)   14.9 x 22.3   13 x 17.3     8.8 x 13.2    5.7 x 7.6    4.6 x 6.1    24 x 36 (mm)
64      16            9             5             2            1            41
48      31            19            11            4            2            80
32      74            45            26            9            6            193
24      135           81            47            16           10           352
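The aperture areas in Table 2 follow directly from the focal length and F/stop (clear-aperture diameter = focal length / F-number). A small sketch, using one Table 1 focal length as an example:

```python
import math

def aperture_area_mm2(focal_length_mm, f_stop):
    """Clear-aperture area in mm²; the diameter is focal length / F-number."""
    diameter = focal_length_mm / f_stop
    return math.pi * (diameter / 2) ** 2

# APS-C entry for a 24-degree FOV: ~52 mm focal length at F/4
print(round(aperture_area_mm2(52, 4)))   # ~133 mm² (Table 2 lists 135)

# The same lens stopped down to F/8 collects only a quarter as much light:
print(round(aperture_area_mm2(52, 8)))   # ~33 mm²
```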
The camera lens must be able to be focused manually; autofocus will not work when aiming at the stars.
For pretty astrophotos, you want to focus the stars into tight points, but for photometry, most of the time
you should defocus to spread the light out over a larger region of the sensor. The only exception is when
observing at the detection limit, because there the instrumental noise is dominant and minimizing the
number of pixels in the star image improves the measurement. When observing faint stars the general rule is to
balance the two sources of error: noise due to multiple pixels and undersampling of the star profile due to
the Bayer structure; this balance can be a challenge for the beginner observer. By contrast, when
observing bright stars the defocus should be large.
Many DSLR cameras come equipped with zoom lenses. These types of lenses can often perform well, but
there is a potential problem with them when they are used for photometry. If the zoom lens changes focal
length over the course of an observing session, the focus will shift and saturation or blending can occur,
and correct astrometry and image stacking can be more difficult. The shift could be due either to an
environmental effect such as temperature change or a physical effect such as the weight of the lens itself
as the camera moves from low to high elevation. All lenses can be similarly affected (you’ll learn about
all this in the image analysis chapter). You can use tape to prevent the focal length from shifting. Many
amateurs shoot with fixed focal length (prime) lenses, which also offer faster F/stops.
2.3 Tripods and mounts
The camera needs to be attached to some kind of mount in order to obtain images of good quality; a hand-
held camera will not provide enough stability to take data-quality images. There are a number of ways to
mount a camera, with a fixed tripod being the simplest and least expensive. It is also possible to mount a
camera equipped with a lens on an equatorial mount - a mount that follows the movement of the sky - or
to attach (or “piggy-back”) a camera onto a telescope that’s on an equatorial mount. Doing so has the
benefit of letting your camera point at exactly the same location in space as it moves across the sky during
the night. Finally, you can also attach a digital camera to a telescope focuser, in essence turning the
telescope itself into a lens for the camera. Which of these you use is a matter of personal preference and
resources. While you can obtain good quality data with any of these mounts, your choice of mount will
define what objects you can observe, and how you observe them.
A tripod provides a standardized mounting point to which cameras or other optical instruments can be
attached. Your camera likely has a small threaded hole in the bottom into which a screw on the tripod
head can be inserted. The tripod keeps the camera fixed and pointed at exactly the same place in
the sky, without being subject to motion (like the small movements of your hands and arms). The
limitation is that the stars move across the sky during the course of a night due to the Earth’s rotation.
This is acceptable, but will limit the exposure times that you can use so that stars are not trailed beyond
the limits that your software can measure.
An equatorial mount (or “equatorial drive”) is designed to follow the movement of the sky. Such a mount
typically replaces the fixed head of a tripod. Equatorial mounts are often used with telescopes, enabling
them to track the movement of the sky and follow the same object in space during the course of the night
without having to constantly adjust the telescope by hand. You can also mount a digital camera or other
optical instrument to an equatorial mount. Equatorial mounts have additional requirements: you need a
power source to drive the mount, and you’ll need to polar-align the mount (point its polar axis at the north
or south celestial pole) so that it can track properly. In principle, a well-aligned equatorial mount allows you to use longer
exposure times than a fixed tripod can. Doing so will let you observe fainter stars, because the longer your
exposure time, the more light you can collect. Table 4 gives sample exposure details for driven mounts.
A “piggy-back” mount simply attaches the camera to an existing optical instrument, most commonly a
telescope on an equatorial mount. In this case, your main concern is how to attach your camera to your
instrument rather than to a mount. Some telescopes will have mounting hardware readily available (either
standard with the telescope or available commercially) but others may require that you design and create
your own mounting hardware. In any case, the main requirements are that the camera is securely and
safely attached to the telescope, and that it remains in place without slipping or shifting as the telescope
moves. You should also be aware that adding a camera to a telescope will change the weight and balance
of the mount, and may therefore require that you rebalance your equatorial mount.
Small tracking devices designed specifically for DSLR cameras also exist. They have no Declination axis,
only a motorized Right Ascension axis that follows the sky. The camera and its lens are mounted on this
platform using a ball-head fixture; the assembly can then be pointed at any part of the sky and will follow it.
Such a device has no tripod of its own and is normally attached to a sufficiently sturdy photo tripod. This
solution works well for exposures of up to a couple of minutes with a camera lens, though not with a
telescope. It costs much less than a good equatorial mount, and the equipment is light, much easier to
transport and set up. A very low-cost alternative is to build a traditional “barn-door” mount, made from two
plywood plates joined by a door hinge and driven apart by a screw that is turned either by hand or by a
small motor and reduction gear. The camera is mounted on one of the plates via a ball-head fixture, and the
hinge axis is pointed at the north (or south) celestial pole. One final solution is to use an entry-level
equatorial mount such as an EQ1 and equip it with a stepper motor; this works for exposures up to 60-90
seconds with a 200 mm focal length lens. The overall cost should be about US$200. This is a light
assembly, very easy to transport, and can be set up in a couple of minutes.
2.3.5. Caveat
Any of these mounts will allow you to take good scientific data, but using an undriven mount -- either a
tripod mount or an equatorial mount without drive or good polar alignment -- will require that you take
shorter exposures, usually less than 5-20 seconds (Table 3). This is because the sky will rotate slightly
across the field of view of your camera during exposures, resulting in trailed images. If the trails are too
long, the extra background pixels in the photometric aperture will increase the noise and lower the SNR
(signal to noise ratio). However, there is software that provides an elongated photometric aperture that
could fit the trail and provide superior results if the star is bright enough. Another limit of long trails (or
defocus) is the risk of blending of stars, in particular if a short focal length is used.
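As a rough cross-check of the exposure limits quoted later in Table 3 (which assumes a trail of 15 pixels of 5.2 microns at declination 0 degrees), the longest untrailed exposure can be estimated from the sky's drift rate of about 15 arcseconds per second of time. This is a sketch, not a substitute for the tables:

```python
import math

def max_untrailed_exposure_s(focal_length_mm, pixel_um=5.2, max_trail_px=15, dec_deg=0.0):
    """Longest exposure (seconds) on a fixed tripod before a star trails more
    than max_trail_px pixels; stars drift ~15 arcsec per second of time at the
    celestial equator, slower by cos(declination) elsewhere."""
    pixel_scale = 206.265 * pixel_um / focal_length_mm          # arcsec per pixel
    drift_rate = 15.041 * math.cos(math.radians(dec_deg))       # arcsec per second
    return max_trail_px * pixel_scale / drift_rate

print(round(max_untrailed_exposure_s(55), 1))    # ~19.4 s (Table 3 rounds to 20 s)
print(round(max_untrailed_exposure_s(200), 1))   # ~5.3 s  (Table 3: 5.5 s)
print(round(max_untrailed_exposure_s(400), 1))   # ~2.7 s  (Table 3: 2.7 s)
```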
The next section will provide guidelines for exposure times based upon your camera optics and also
whether you are using a fixed mount (without tracking) or equatorial mount with tracking.
collect just the raw image. Your first step is to turn the mode dial to “M” to acquire manual control over
the exposure time and F/stop, described below.
The next step is to choose an appropriate F/stop. The F/stop is a number equal to the focal length of the
lens divided by the diameter of the aperture, the opening that lets light into the camera. The lower the
F/stop, the more light gets in, but sometimes there are lens defects that can be minimized by avoiding the
lowest F/stop. As a general rule you want to collect more light, so you want your F/stop to be a small
number, such as F/2 or F/4. If you go above F/7, you have probably stopped it down too much.
You will find an ISO setting on your camera; this setting determines the amplification of the sensor
output. Higher ISO is helpful when imaging faint stars, but with a bright star, high ISO increases the risk
of saturation, which occurs when a sensor pixel receives more photons than it can count. On the other
hand, a low ISO number avoids the saturation problem and allows for a wider range of brightnesses to be
measured. An ISO of 100 to 200 is typically recommended for bright stars. Higher ISO may be necessary
for fainter stars, depending on the aperture, exposure time, and number of pixels illuminated by the
starlight.
As mentioned above, the ADU output of the ADC is proportional to the number of electrons collected by
the photodiode of each pixel. The calibration factor (the gain, in electrons per ADU) is inversely
proportional to the ISO number. For most common APS-C DSLR cameras having a 14-bit ADC, the ideal
calibration factor of one electron per ADU is reached between ISO 100 and 300, depending on the pixel
size. Below that ISO range, the finest data increment (one ADC step) corresponds to several detected
electrons, so some sensitivity is lost; this regime allows high photometric accuracy and dynamic range at
high flux levels (where the pixel capacitor can be filled with electrons), but the ability to detect faint
signals is limited to a couple of electrons. In modern cameras, the output of the converter is typically a
14-bit value, which may include some coding offset (e.g. 1024 or 2048 for Canon cameras). Thus, of the
16384 possible values represented by a 14-bit number, only approximately 14000 are usable. At ISO 400
and above, the ADC records every electron collected by the photodiode, but the total number of electrons
that can be represented is reduced in proportion to the ISO number, and the available dynamic range and
SNR are reduced accordingly. Figure 2.7 shows electron linearity and saturation for the Canon 450D Green
channel at various ISO settings.
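As a rough numerical illustration of the gain-versus-ISO behaviour described above (the ISO at which the gain reaches one electron per ADU varies from camera to camera; the ISO 200 crossover and 14000 usable ADU levels used here are assumptions, not measured values):

```python
def gain_e_per_adu(iso, iso_at_unity_gain=200):
    """Approximate camera gain in electrons per ADU; inversely proportional to ISO."""
    return iso_at_unity_gain / iso

def max_electrons_recordable(iso, usable_adu=14000, iso_at_unity_gain=200):
    """Largest signal (electrons) the 14-bit ADC can represent before clipping."""
    return usable_adu * gain_e_per_adu(iso, iso_at_unity_gain)

for iso in (100, 200, 400, 800, 1600):
    print(iso, round(gain_e_per_adu(iso), 2), int(max_electrons_recordable(iso)))
# Doubling the ISO halves the gain and halves the electron range the ADC can record,
# which is why a low ISO preserves dynamic range on bright stars.
```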
To this point we have assumed that only stellar photons are measured by the camera, but this is in fact an
oversimplification. The RAW output measured as ADU is proportional to the photon count of the star,
plus sky background, plus several sources of noise. The noise comes from intrinsic fluctuations of the
source, scintillation of the atmosphere, and the camera’s own electronic circuitry. In particular, some of
the ADU measured are in fact dark current caused by thermally generated electrons in the photodiode.
Most of the time, the contribution of dark current can be mitigated by taking a series of dark frames
(images where no light is permitted to enter the system) that will be subtracted from the RAW output.
Random amplification noise and shot noise from the mean dark current also contribute to the measured
signal. These terms are discussed in Chapter 4.
Figure 2.7. Electron linearity and saturation for Canon 450D Green channel at various ISO settings.
(courtesy Roger Pieri)
Now you will set the exposure time so that you can take photos at least several seconds long. The amount
of time you choose will depend on several factors, such as the brightness of the star, the F/stop, the ISO
setting, and whether you wish to avoid star trailing. If the star is faint, you need to expose long enough to
measure the brightness accurately. If the star is bright, a long exposure risks saturation. Since a lower
F/stop allows in more light, a lower F/stop also allows a shorter exposure. As the ISO setting is lowered,
the required exposure time increases. If your camera is mounted on a tripod, your exposure times are
limited to about 5-20 seconds (Table 3) to avoid having long star trails. If your camera is on a driven
mount, you can go up to about 60 seconds before worrying about the background brightness of the sky or
the accuracy of the drive. For long exposures, you may need to set the exposure time to “BULB” and use
a cable release to operate the shutter. You might choose to take multiple images of identical exposure
time and combine them in a software process called stacking. The combined exposure of stacked images
should total at least 60 seconds to properly average out the variations in signal caused by scintillation,
commonly seen as twinkling. The integration time needed depends on the level of photometric accuracy
desired for the observation, the sky “seeing,” and the aperture of the instrument. Scintillation is strong for
small apertures and weakens as the aperture increases; it is another effect of “seeing,” the turbulence of the
atmosphere.
The tables below give the faintest magnitude reachable under an excellent sky, at the zenith, with an ISO
400 setting, using various optics at maximum aperture, without (Table 3) and with (Table 4) an RA drive.
The corresponding exposure time and the saturation level are given for a photometric aperture of 25 pixels
at ISO 400 and ISO 100. A much larger dynamic range can be reached using a larger defocus.
Table 3: Exposure examples for fixed tripod (no drive)

Optics type             FL (mm)   F/Stop   Aperture      Max          Limiting   Sat.Mag   Sat.Mag   FOV** (deg)
                                           size (mm²)    exposure*    mag        ISO 400   ISO 100
Zoom 18-55 / 3.5-5.6    55        5.6      76            20 s         8          5.1       3.7       15.3 x 22.8
Zoom 70-300 / 4-5.6     70        4        240           16 s         9          6.2       4.8       12 x 18
Tele 200 mm F4          200       4        1963          5.5 s        10         7.3       5.9       4.24 x 6.36
Zoom 70-300 / 4-5.6     300       5.6      2254          3.7 s        10         7.1       5.7       2.8 x 4.2
Refractor 400 mm F5     400       5        5026          2.7 s        10.5       7.6       6.2       2.1 x 3.2
* Assumes a star trail of 15 pixels (5.2-micron pixels) at declination 0 deg. Averaging out the scintillation
requires a total of 60 s of integration, so several images should be stacked or averaged to reach a 60 s
series. Making several series (5 or more) allows a good statistical analysis; it is important to optimize the
settings.
** APS-C size sensor
“FL” is the focal length.
“Limiting mag” is the faintest star magnitude measurable with an instrumental uncertainty of 0.05 mag in
a photometric aperture of at least 25 pixels, in a single image. The overall uncertainty will be higher,
depending on the sky conditions.
“Sat.Mag” is the magnitude at which at least one pixel reaches 75% of the saturation level.
Table 4: Exposure examples for driven mount

Optics type             FL (mm)   F/Stop   Aperture      Max          Limiting   Sat.Mag   Sat.Mag   FOV** (deg)
                                           size (mm²)    exposure*    mag        ISO 400   ISO 100
Tele 200 mm F4          200       4        1963          60 s         13         9.9       8.5       4.24 x 6.36
Zoom 70-300 / 4-5.6     300       5.6      2254          60 s         13         10        8.6       2.8 x 4.2
Refractor 400 mm F5     400       5        5026          60 s         14         10.9      9.5       2.1 x 3.2
Newton 800 mm F4        800       4        31416         60 s         16         12.9      11.5      1 x 1.6

Notes as in Table 3.
The DSLR camera offers a variety of file formats. The one required for photometry is RAW, which
records directly what the sensor has detected and includes no processing or compression by the camera.
RAW requires an enormous amount of memory storage, but all of this information is necessary for
accurate photometry. While JPG is a more common format for photographers, it does not preserve the
information the universe has laboriously delivered to your camera sensor. It is recommended to avoid the
combined RAW+JPG mode that exists in many DSLR cameras. The JPG output requires a lot of work from
the camera’s processor (noise reduction, various internal corrections, de-Bayering, sRGB conversion, and so
on); it uses a lot of battery power and generates heat that increases the dark noise.
There are a number of other settings on your camera that are undesirable in photometry. Any function that
involves the camera processing the image, such as noise reduction, should be avoided. You will also want
to turn down the LCD brightness (even switch it off) to maintain your night vision and your battery life.
The authors of this guide cannot know all the settings that may be available on your camera, but when in
doubt, choose the one that sounds like it will not do anything fancy.
Bayer Red, Green, and Blue filters (actually RGGB pixels) are made from pigments deposited on the top
surface of the pixels of the CMOS Silicon Detector, and cannot be cleaned or removed. They form a
checkerboard pattern of red, green, and blue filters that is placed on top of the sensor (see Figure 2.3).
Each pixel is therefore sensitive only to its own color of light. A microlens array (cemented onto the
detector) focuses the light falling on each pixel into the most sensitive part of it, improving the filling
factor of the pixel to a level approaching 100%. At a short distance in front of the sensor itself is a stack
of filters that performs several functions, including:
● IR dye that reduces excess sensitivity to deep red and Infrared light - cannot be removed from a
modern camera without removing all the functions of the filter stack
● IR cut (dielectric filter) that eliminates Infrared light above 700 nm
● UV cut (dielectric filter) that eliminates Ultraviolet light below 400 nm
● Anti-Moiré that reduces the texture effect due to the Bayer structure (a low-pass spatial filter,
slightly reduces the resolution, and reduces the undersampling issue in photometry)
Figure 2.8 DSLR G spectral response versus (a) Johnson V and (b) Tycho VT pass-band definition
(courtesy Kloppenborg et al., JAAVSO, 40, 815 (2012)).
The wavelength response of the combined G+G channels of the DSLR is not far from the Johnson V
standard definition and delivers a good V magnitude after a classical transformation. The VSF technique
(V synthetic filter; R. Pieri, JAAVSO, 40, 2, 834 (2012)) combining the RGGB channels works
somewhat better. The B channel signal can be transformed to Johnson B for most stellar types, but this
remains a subject for experimentation. The R channel is too far from the Cousins R (Rc) to be
transformable. Since the typical DSLR blue and green response can be effectively transformed to the
Johnson system, it should not be necessary to add additional filters to the light path of an unmodified
DSLR.
The filter stack of many DSLR cameras can be removed (at some risk) by specialists. Doing so increases
the red and near-IR response (H-alpha, etc.). Total removal of the filter stack is of interest for spectroscopy;
for classical imaging, however, the IR- and UV-cut functions must then be replaced by a similar external
filter (these functions are lost along with the IR dye). Such an external IR/UV filter is also needed for V
and B (and possibly R) photometry. The removal of the IR dye improves the red end of the G response
somewhat and provides a better V with a smaller transformation; it may also help in achieving a Cousins R
value, but this remains to be confirmed. B is unchanged. The elimination of the anti-Moiré filter
(unavoidable when removing the IR dye) makes the Bayer undersampling issue more critical (more
defocusing is needed).
With the filter stack left in place, adding a Johnson V filter in front of the lens, whether using the R+G+B
or the G+G output alone, still requires a large transformation and reduces the SNR. Large errors for stellar
spectral types K and M remain after transformation; therefore, this addition is not recommended. Figure
2.9 shows the spectral response of the red, green, and blue channels of an unmodified Canon 450D
compared with the Johnson V passband.
The addition of a photographic Y50 filter (blue cut) to an unmodified DSLR, combined with the VSF
technique using G + 0.05 x R, yields a very good V without decreasing the SNR too much. It provides the
best results of all these techniques for spectral types K and M.
In conclusion, an unmodified DSLR without additional filters is recommended for regular V photometry.
IR dye removal could be interesting for specific projects.
Figure 2.9. Spectral response of the red, green, and blue channels of an unmodified Canon 450D compared
with the Johnson V passband (courtesy Roger Pieri).
Chapter 3: Software Overview
With the exception of your imaging device, computers and software are the most important things in
DSLR photometry. Many aspects of planning observations, acquiring and calibrating images, and
measuring, analysing and reporting results are aided by the use of appropriate software. There are a
number of free and commercial options available, with new offerings coming to the market occasionally.
Some perform multiple tasks while others are more specialised. No one package does everything so you
will probably end up using a small suite of programs, each dedicated to a specific task within your
workflow.
Because software is constantly changing, this manual does not provide guidance for any single software
package. Instead, we provide a high-level overview of the features you need, probably want, and might be
able to use within a photometric reduction suite. If you are in need of step-by-step tutorials for a particular
software package, please see the DSLR photometry tutorials on the AAVSO’s website.
As described in the previous chapter, in order to extract accurate photometric measurements from your
images, it is imperative that the raw data values of the camera remain unaltered by any built-in
processing. Consequently, your photometry software must be able to read and manipulate the RAW
format which your camera produces. There is no universal RAW format: Canon uses CRW and CR2 files
whereas Nikon uses NEF files. Other camera manufacturers use something different.
When shopping for software (or a new camera), keep in mind that when a new camera is released, it may
take several weeks to months before processing and photometry software is updated to read the new
RAW format. You should verify that support for your camera is present by consulting the software
publisher’s website.
As will be explained in the next chapter, a series of calibration images must be taken in addition to your
science images. These bias, flat, and dark images characterize constant offsets, unequal illumination
caused by your optics, and hot pixels (or other non-linearities) in your camera’s detector. To obtain an
accurate estimate of the intensity of the stars, these effects must be removed. Thus, your software must
not only read and display the images, but also be able to apply these calibration frames to your science
images.
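As a schematic of how these calibration frames are applied, here is a minimal NumPy sketch; real photometry software does this for you, and the exact recipe (for example, whether the dark already contains the bias) differs between programs:

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """Apply calibration frames to one science ("light") image.

    All inputs are 2-D arrays of the same color channel, taken at matching
    settings (same exposure/ISO for the dark, same optics for the flat)."""
    dark_subtracted = light.astype(float) - master_dark    # removes offset + thermal signal
    flat_field = master_flat.astype(float) - master_bias   # flat with its own offset removed
    flat_field /= np.mean(flat_field)                      # normalize the flat to ~1.0
    return dark_subtracted / flat_field                    # correct uneven illumination
```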
3.1.3 Extraction of individual color channels
As described in the previous chapter, the Bayer color filter array on DSLR sensors allows red, green and
blue information to be recorded simultaneously in the same image. Each color is said to be in a separate
channel or plane. Your photometry software must be able to separate RAW images into individual red,
green, and blue images. There are actually two green channels and some software, e.g. AIP4Win,
combines them into one image. Other software, e.g. MaxIm DL, treats each green channel separately. At
present most software only extracts one color channel at a time, so it may be necessary to repeat the
extraction process if all three colors are of interest.
Many of the popular photometry programs include the capability to extract the color channels from the
RAW image file (e.g. MuniWin, IRIS, AIP4Win, MaximDL). With these, you can use a single program
to extract the Green channel, perform image calibration, and perform photometric analysis. A few
popular photometry programs do not handle DSLR RAW image files (e.g. MPO Canopus, VPHOT), or
do not have the capability to extract individual color channels from your raw image. If you like the
photometry tools in one of these, then you’ll first have to extract the Green channel and convert the single-
color image to the FITS format that MPO Canopus and VPHOT recognize.
Most programs produce extracted color images that are smaller than the RAW image (e.g. 5200 x 3460
pixel RAW image will result in a 2600 x 1730 pixel extracted green image). AIP4Win, however,
interpolates how much red, green and blue light would have fallen on each pixel in the image. It does this
by looking at, for instance, the surrounding green pixels and interpolating how much green light should
have fallen on the red and blue pixels. Thus extracted images are the same size as the RAW image.
Several interpolation methods are available and it is important to select the bi-linear option for greatest
accuracy.
Note: Depending on which software you use, color channels may need to be extracted prior to calibration.
It is very important not to mix the calibration frames for different color planes.
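As an illustration of what channel extraction means at the pixel level, the sketch below uses the rawpy library to read a Canon RAW file; the file name is a placeholder, and the RGGB ordering assumed in the slicing must be verified against your own camera's Bayer pattern, as noted in Figure 2.3:

```python
import rawpy
import numpy as np

raw = rawpy.imread("IMG_0001.CR2")          # placeholder file name
mosaic = raw.raw_image_visible.astype(np.uint16)

# Assuming an RGGB Bayer layout (check raw.raw_pattern / raw.color_desc for yours):
red    = mosaic[0::2, 0::2]                 # every other pixel, starting at (0, 0)
green1 = mosaic[0::2, 1::2]
green2 = mosaic[1::2, 0::2]
blue   = mosaic[1::2, 1::2]

# Each extracted plane has half the linear dimensions of the full mosaic, which is
# why most programs produce e.g. a 2600 x 1730 image from a 5200 x 3460 RAW frame.
# The two green planes may be kept separate or averaged:
green = (green1.astype(np.float32) + green2.astype(np.float32)) / 2.0
```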
Photometric analysis is the measurement of the number of photons coming from a star that hit the
detector. Each program has its own specific method for taking this measurement but they all generally
require the user to select the radius (in pixels) of a circle to measure around the target and comparison
stars. Each program has its own commands for taking this measurement, but they all use a “measuring
aperture” and a “sky aperture”. The measuring aperture is a small circular (or square) region surrounding
the star. The software will count up the total signal within the measuring aperture. This total will include
photons from the star, plus photons from sky-glow. The “sky aperture” is usually an annulus that
surrounds the measuring aperture and contains no stars. The software uses the signal measured in the
“sky aperture” to subtract the sky-glow from the star signal within the measuring aperture. This
procedure is called “aperture photometry”. Many programs allow this procedure to be batch processed
(see topics below on batch processing and scripting) which will greatly simplify and speed up the analysis
if multiple images are involved.
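A bare-bones sketch of the aperture photometry just described (NumPy; the star position, radii, and exposure time are placeholders, and real software adds centroiding, partial-pixel handling, and error estimates):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_star=10, r_in=15, r_out=25):
    """Sum the star signal inside a circular aperture and subtract the sky,
    estimated from the median of an annulus around the star."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)

    star_mask = r <= r_star
    sky_mask = (r >= r_in) & (r <= r_out)

    sky_per_pixel = np.median(image[sky_mask])            # sky-glow estimate
    net_counts = image[star_mask].sum() - sky_per_pixel * star_mask.sum()
    return net_counts

# Instrumental magnitude from the net counts (exposure time in seconds):
# m_inst = -2.5 * log10(net_counts / exptime)
```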
3.2 Useful software features
The following are some additional features found in some photometry software which you’ll find make
image processing much more efficient. None are required, but they will make your work easier.
To remove the drudgery of manually processing each image individually, most DSLR observers will
quickly want to process whole batches of images in one go. Depending on your acquisition technique and
target star properties, you might want to take dozens or hundreds of images of the same field. And you
will also have to take multiple calibration frames as well. Processing that many files one by one will
quickly ruin all the fun in DSLR photometry, so what you will want is batch processing: performing some
image processing operation on a batch of files.
3.2.2 Scriptability
Even better than batch processing, scripting allows you to combine several operations in a configurable
workflow. Some software packages define a simple ‘programming language’ to let the user write scripts
(e.g. IRIS), others use a Graphical User Interface (GUI) to define the workflow interactively and then
apply it to sets of files (e.g. Fitswork). This is an advanced feature that is only offered by some software
packages, especially those that are also used by professional astronomers. Beginners should not worry too
much about scripting and work out their workflows manually first, but experienced observers will find
this feature very helpful to boost productivity and avoid the frustration of performing some trivial tasks
over and over again. When initially selecting a software package, you may wish to make sure that scripting
will be available to you later, even though you will likely not use it while learning.
An easy way to improve the Signal to Noise Ratio (SNR) of your images and/or reach fainter targets is to
align and stack (e.g. add together or average) images. Many photometry software packages can align and
stack photos although the step by step procedure will be slightly different. In general the software will
have to first register each image by identifying several stars common to each image. In the alignment
phase the images are then rotated and moved to ensure the registered stars in subsequent images are
aligned. The stacking phase then calculates the median or average values of each pixel from the images in
the stack. The final image is the result of these stacked pixel values.
The noise portion of the content of each pixel is not constant but fluctuates around a mean value and may
change from one image to the next. By stacking images, the signal to noise ratio tends to improve. This is
because addition of several measurements results in both the signal and the noise increasing in absolute
terms, but the noise, being random, increases more slowly than the signal. For regions in the stacked
image with no stars the result will be pixel values close to a constant sky background level (close to zero
for short exposures from a dark site) and with reduced scatter compared with the individual images. In the
case of stars the pixels will not change much from one image to another so the result of the alignment and
stacking process will reduce the noise while leaving any stars mostly unchanged.
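A minimal sketch of combining frames that have already been aligned (NumPy; registration and alignment are left to your photometry software). For purely random noise the pixel scatter drops roughly as the square root of the number of frames stacked:

```python
import numpy as np

def stack_frames(frames, method="average"):
    """Combine a list of co-registered 2-D frames into one image."""
    cube = np.stack([f.astype(np.float32) for f in frames])   # shape: (N, rows, cols)
    if method == "median":
        return np.median(cube, axis=0)      # more robust to satellites / cosmic rays
    return cube.mean(axis=0)                # best SNR when no outliers are present

# Example: averaging 16 frames reduces random pixel noise by about sqrt(16) = 4x.
```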
Because each program has a specific set of steps to go through when aligning and stacking images (and
because new versions of the software may have a slightly different procedure) the specific steps have not
been included in this manual but examples can be found on the DSLR section of the AAVSO website.
Image acquisition can be controlled by software when the camera is connected to a computer via USB
cable (normally used for downloading images from the camera’s memory card). Canon provides the EOS
Utility program with their DSLRs. Other camera manufacturers should provide similar software, either
free or at an additional cost. Third-party software is also available, including Backyard EOS and MaxIm
DL, among others.
Such software greatly facilitates framing the target and setting an appropriate amount of defocus and
exposure duration. You can quickly check the framing of target and comparison stars by acquiring an image
and displaying it on the computer. If necessary, camera pointing can be adjusted before the science
images are captured. The image can also be measured to ensure stars of interest are not over- or under-
exposed, and exposure duration adjusted accordingly.
Autofocus will not work on the night time sky and should be turned off. In fact, for photometry the image
needs to be slightly defocused (see Image Acquisition chapter). Setting the lens to the infinity (∞) marker
is unlikely to be suitable either, especially if you are using a zoom lens. Manual focusing can be
particularly time consuming and frustrating so software control is desirable. Backyard EOS is one
program that does this with Canon electronic lenses. Other software may be available for specific
cameras.
Backyard EOS also automates image acquisition, as do other programs. This is particularly useful when
multiple images of one field are required for later stacking or to record relatively rapidly varying stars,
e.g. eclipsing binaries. The software can be programmed to obtain a set number of images at specified
time intervals.
MaxIm DL is a powerful acquisition and analysis package popular with CCD and DSLR imagers alike.
However, unlike most other acquisition software, MaxIm DL saves images in FITS format [see section
3.2.6], not the camera’s native RAW format. This is not a problem as FITS is the usual input file format
for photometry software.
Plate solving is the process of automatically identifying the stars detectable in an image, by cross
referencing with a star catalog. If you have prepared your observation session by looking at finder charts
first (as you should), you will soon learn how to identify the target and comparison stars manually
without the help of automatic plate solving. But for some advanced techniques like automatic photometry,
or when you think you notice a change of brightness in one of the stars in your field that might not even
be part of your original observation, plate solving can be useful. Some advanced packages like MPO
Canopus (http://www.minorplanetobserver.com/MPOSoftware/MPOCanopus.htm) even use this to
automatically identify variable stars (or asteroids etc). A web-based solution is astrometry.net, which also
offers standalone (Linux) software which you may download and use locally.
The “Flexible Image Transport System” (FITS) is an open standard for images (and some other
astronomical data sets like tabular information) and is very popular in the astronomy community. It
allows lossless storage (the stored file contains all the information that was present in the original RAW
image file) which is essential for photometry work. Recall that JPG is a compressed file format, and it is
not lossless. Because FITS is supported by practically all serious astronomy software, it is a very good
choice when you want to exchange image data between different software packages. Another big
advantage of the FITS format is that it allows storage of image metadata (e.g. time of observation,
observation location, duration of exposure, field coordinates in the sky, etc.) in a standardized way that
software can understand. Also, for archiving your images, FITS is the best choice. There are, however,
several sub-formats of FITS and you might have to experiment a bit to find a common subformat
supported by all of your favorite software.
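As an illustration of how standardized FITS metadata can be read programmatically, here is a minimal sketch using the widely available astropy library; the file name is hypothetical and the exact header keywords (DATE-OBS, EXPTIME, etc.) depend on the software that wrote the file:

from astropy.io import fits

# Open a FITS image and inspect its header and pixel data.
with fits.open("science_frame.fits") as hdul:      # hypothetical file name
    header = hdul[0].header
    data = hdul[0].data                             # the image as a numpy array
    print(header.get("DATE-OBS"), header.get("EXPTIME"))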
As will be explained in greater detail in the next chapter, corrections for differential extinction (the dimming
of starlight as it passes through differing amounts of atmosphere) and transformation (to make the DSLR green
channel conform to a standard astronomical V filter, etc.) are often performed during the reduction stage of
analysis. Most photometry software does not perform these corrections; however, a few programs do (e.g. the
AAVSO’s VPHOT). If you are going to use this advanced step in your data analysis, you can either use a
program (such as VPHOT or MPO Canopus) that includes it, or use spreadsheets (like those available
from the AAVSO’s DSLR section).
Observations should be submitted to the AAVSO International Database via the WebObs site
(http://www.aavso.org/webobs). Several photometry packages (e.g. AIP4Win, MaxIm DL, VPHOT and
MPO Canopus) can generate suitable text file reports.
Manually setting the camera date and time with reference to a radio time signal at the beginning of the
observing session is usually sufficient when observing longer period variables. In other situations
accurate time stamping of images is important, e.g. time series observations of eclipsing binary stars to
determine the precise time of minimum light. Canon cameras, and presumably others, can be configured
to synchronize with the computer clock when attached via USB cable. The computer clock can be
automatically synchronized at regular intervals with an internet time server. Many modern operating
systems automatically perform this task; however, dedicated software can be used such as Dimension 4
(http://www.thinkman.com/). Software control of the DSLR (see above) allows a convenient way to
ensure the camera clock is correctly set prior to acquiring each image.
3.4 Software capability comparison chart
The most common software solutions used for Variable Star observing are Windows or Linux based.
Four common photometry software packages are compared below. Note: There are several versions of
MaxIm DL available. In order to do DSLR photometry you will need the MaxIm DL Pro version.
Features and prices were applicable in early 2013.
[Feature comparison table not fully reproduced here. The table compared IRIS, C-Munipack (Muniwin), AIP4Win, and MaxIm DL on features including plate solving and report generation; see the notes and links below.]
Notes:
1) limited formats supported, contact the author regarding your camera (Muniwin website)
2) The $99 cost includes the book The Handbook of Astronomical Image Processing.
3) Only the MaxIm DL Pro and Suite will have all the features required for DSLR photometry.
4) http://www.astrosurf.com/buil/us/iris/iris.htm
5) http://c-munipack.sourceforge.net/
6) http://www.willbell.com/aip/Index.htm
7) http://www.cyanogen.com/maxim_main.php
3.5 Other useful software
3.5.1 Star Charts and Planetarium Software
Printed star charts can be used for locating the region of sky to be imaged. Printed does not necessarily
imply paper charts; they may be stored as files on a digital device. Charts which specifically allow finding
target variable stars may be generated online at http://www.aavso.org/vsp. This “Variable Star Plotter”
page, Figure 3.1, can generate charts in several sizes by name of the variable star. The field of view of a
DSLR camera with a typical lens is usually well represented by a “B” size chart with the “CCD”
orientation option.
The resulting chart has the variable star plotted in the middle, Figure 3.2. When setting up and focusing,
these charts are useful to verify that the variable is well placed in your images. The magnitudes of some
adjacent stars are also indicated, and you should ensure that some of similar magnitude to your target star
are included in the field of view, so that they can be used as comparison stars in the photometric
reduction. The magnitudes are labeled without decimal points (which may be confused with the symbol
for a faint star), so a magnitude 7.1 star would be labeled 71.
Comparison star magnitudes on VSP charts are given to one decimal place only. This is generally fine for
visual observers but not adequate for DSLR analysis. Select the “Photometry Table” option on VSP to
produce a detailed list of comparison stars for the field. Magnitudes and estimated error are given to 3
decimal places.
Figure 3.2 Variable Star Plotter chart of the field around the variable U Cep showing comparison star
magnitudes to one decimal place (without the decimal point)
If you are using a well aligned equatorial mount, the coordinates of the star supplied on the chart will help
you to move quickly to the proper star field.
If you are simply using a tripod, a chart showing more of the sky may be useful to you in aiming your
camera. Paper charts showing large areas of the sky, or sky atlases, can be used for this. However,
planetarium software is more convenient because the displayed chart can be resized and oriented to match
your imaging system and targets easily searched for and centered. Many planetarium packages can also
control a telescope mount (see “Telescope and/or mount control” below). There are numerous free and
commercial options available such as Stellarium, Cartes du Ciel, and The Sky. Some planetarium
software for mobile devices can detect the direction you are pointing toward and adjust the view
automatically to show the stars in that direction, which can be very convenient.
One point to bear in mind when using software is that the variable star may be shown at a different
brightness than you will see it on your observing night, specifically because it is variable!
Many telescope mounts with “GoTo” capabilities can be controlled using software on your computer.
These types of mounts often come with drivers or communication protocols that are understood by
planetarium software such as Stellarium or TheSky. There are at least two major advantages to mount control
from software. One is that a “target” may be easily located in the first place (provided it is visible in the
sky at the time). A second is that a tracking mount will allow a camera to stay pointing at the same target,
to compensate for the rotation of the Earth. This allows longer exposures and permits fainter stars to be
detected. Ideally, the camera should be mounted on an equatorial mount, but many “go to” mounts are
altazimuth mounts which are easier to set up and are readily controlled by modern computers (which may
be inside the mount) to follow the stars. Strictly speaking, use of an altazimuth mount (without an
expensive camera rotator) causes the image to rotate slightly. Most software that deals with sequences of
images can compensate for this, and for short exposures it is not a serious issue within individual images.
Chapter 4. Image Acquisition
In this chapter we dive into the details of the preparatory work you should do before snapping your first
data set, how to take calibration frames, how to find your starfield in a tiny viewfinder, how to acquire
images and assess their quality, and finally some tricks of the trade from experienced DSLR
photometrists.
Perhaps one of the most important aspects of doing science is keeping good records of what you have
done. This may sound like an overly simplified concept, but a logbook of your observing setup and
sessions will not only help you identify problems with your data or observing procedures, but also let
other experimenters duplicate your experiment should it need to be repeated.
At a minimum, your records should indicate the date and time of your images, the targets on which
science data are being taken, the weather conditions, and anything that goes wrong during your observing
session. It is also a good idea to periodically note the temperature, humidity, and sky conditions as these
can alter the quality of your images. Don’t forget to note anything unusual about the session or your
equipment. Is your neighbor’s garage light on tonight when it wasn’t on last night? Did you run out of
power halfway through an imaging session and change batteries?
As with any observing session, most of the work is done in the dark. You should find a location from
which to observe that is free from obstructions both in the sky and on the ground. Whether you are using
a tripod or a telescope mount, familiarize yourself with the location and operation of its controls and
features which might be useful. For example, how do your tripod’s legs extend? How does the leg bracing
lock? How do the stops/brakes work on the head? Does the head feature a quick release platform? Try
attaching your camera to the mount in the daylight and reaching extreme locations (e.g. zenith) to verify
that nothing interferes with pointing, gets tangled, or is unintentionally damaged during your session.
Concerning your camera, you should be able to find and use all of the following controls:
● Focus and zoom rings
● Manual focus (e.g. turn off auto focus)
● Image stabilization switch (turn to off)
● Exposure time
● F/stop
● ISO setting
● Image save type (set to RAW)
Perhaps one of the most obscure “gotchas” in DSLR photometry happens when the camera either loses
power or the battery gets too low. Some observers in the past reported that their DSLR’s background noise
increased dramatically as the battery charge decreased or after the battery was changed. This does not
appear to be an issue with newer cameras, but is something to keep in mind if you are using equipment
more than a few years old. If you plan on doing long observing sessions (i.e. near the length of time that
your battery lasts), it would be advisable to use external power or have a second battery if external power
is not practical for your observing location.
Locating a variable star and its comparison stars without a good-quality finder chart is often an exercise in
futility, so be sure to bring one with you into the field. It is often particularly helpful to bring finder charts
which have different fields of view, especially with fields of view which are larger than that of the
camera.
A good observing session starts with a well-defined plan. We suggest creating a checklist of the actions
required to obtain scientific quality images, especially if this is your first attempt at DSLR photometry.
What fields do you intend to observe? Where are the calibration stars located (finder charts help)? What
camera settings will be required? How many images are needed? These items should all be recorded in
your observing logbook.
Figure 4.1 Highly stretched image of an evenly illuminated light box (image by Mark Blackford).
In Figure 4.1 we can see several of the aforementioned artifacts. The circular splotches are caused by dust
on the optics, the reduced intensity in the corners is due to vignetting, and the vertical and horizontal lines
are due to pixel sensitivity variations and electronic noise. Although not obvious to the eye, these artifacts
are also present in science images and should be removed before photometry is undertaken.
To properly account for these effects, you must take a series of calibration frames and perform a number
of mathematical operations on your science frames including subtraction of bias and dark frames to
remove the fixed-component noise and division of the resulting image by a flat frame to remove the
effects of vignetting and pixel-to-pixel sensitivity variations as well as dust shadows. Details on how to
perform these operations can be found in your photometry software’s manual. This section (which could
have been a chapter in itself) provides a detailed explanation of the various artifacts these calibration steps
attempt to mitigate. For further reading, we refer the reader to the Handbook of Astronomical Image
Processing by Berry and Burnell, or similar online sources.
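The arithmetic behind these corrections is simple. The following sketch (in Python with NumPy, not the procedure of any particular package) shows the usual order of operations; it assumes the master frames have already been made, that the master dark was taken with the same exposure and ISO as the science frame and therefore still contains the bias signal, and that all arrays are floating point with matching dimensions:

import numpy as np

def calibrate(science, master_bias, master_dark, master_flat):
    # A dark frame taken with the shutter closed at the same exposure as the
    # science frame contains bias plus dark current, so subtracting the
    # master dark removes both additive terms at once.
    corrected = science - master_dark
    # The flat describes a multiplicative, position-dependent sensitivity.
    # Remove its bias, normalize it to a mean of 1.0, then divide.
    flat = master_flat - master_bias
    flat_norm = flat / flat.mean()
    return corrected / flat_norm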
The easiest artifact to understand in images is random noise. Random noise is totally independent from
pixel to pixel, and from image to image. In each picture the pattern of random noise is different. The
grainy aspect of images (Figure 4.2) taken at high ISO is due to this noise which generates a positive or
negative error in our magnitude measurement.
There are two principal sources of random noise in DSLR images. The first is Johnson-Nyquist noise.
This noise is generated in the camera electronics and is caused by thermal agitation of electrons. This is
often referred to as “read noise.” The second source of noise is shot noise which is related to the number
of photons, N, detected and arises from the statistical nature of photon emission at the source. Shot noise
is simply the square root of the number of photons detected.
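As a worked illustration of this relationship (the notation here is ours, not from the original text): if N photons are detected, the shot noise is √N, so the signal-to-noise ratio from shot noise alone is

    SNR = N / √N = √N

In other words, collecting four times as many photons (for example by quadrupling the total exposure time) only doubles the precision of the measurement.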
Figure 4.2 Two 120 second exposures, ISO 400, 20°C, same block of pixels from the raw images. Bright
pixels are impulses of dark current and are the same in both images. The grainy background is random
noise and is different in both images (images by Roger Pieri).
Random noise is present in calibration images as well as science images and cannot be eliminated; the
only way to reduce its impact is to increase the signal (photons) by using longer exposures, either in a
single long shot or by "stacking" (adding) several shorter images if there is a risk of saturation.
Many cameras have built-in software filters which reduce the visibility of this noise in images. Although
useful in everyday photography, the filters alter the original data in the image and should not be used in
photometry. Thus any on-camera noise reduction options should be disabled when performing
photometry.
Contrary to Johnson-Nyquist and shot noise, fixed pattern noise (FPN) is not random; it is due to
technological defects of a permanent nature. When particular pixels are affected by such defects they
form a pattern which is repeatable from image to image. Unlike random noise, FPN can be characterized
and removed during the image calibration process.
There are several types of fixed pattern noise including bias and systematic offsets, dead/hot pixels, dark
current, and dark current impulses. In the next few paragraphs we describe each of these in greater detail.
A bias is a tiny shift of the black level of each pixel, often linked to the row/column organization of the
pixels. It can be either uniform across all pixels, or form strips at the black level of images (see Figure
4.3). The amplitude is extremely low in present sensors, usually only a few ADU.
Note: there are similar defect patterns (strips) in DSLR images that are not
repeatable from image to image, and cannot be removed by image calibration. This is usually due to
spurious signals induced by the digital electronic circuits into the highly sensitive analog electronics.
However, they are at very low ADU levels and not too much of a problem.
Some cameras have a systematic offset by design. This is a precisely defined shift of the coding of the
black level in the image file, often 1024 or 2048 ADUs in modern cameras. This offset makes it possible
to record negative noise excursions and some black-level drift. This feature is important
for photometric processing because it needs to be subtracted before any non-additive mathematical
operations, like flat field correction, are applied.
Bias and systematic offsets are present in all science and calibration images. They are removed by
subtraction of a master bias frame (discussed later in this chapter).
Figure 4.3 Highly stretched master bias frame showing fixed pattern noise with an amplitude of a few
ADUs (ISO 200). This image has both a uniform offset from 0 ADU and strips linked to row/column
organization of the addressing electronics (image by Mark Blackford).
Dead and hot pixels are pixels which are not functioning properly. Dead pixels don’t respond to light and
usually have ADU values near the systematic offset level. Hot pixels have too much dark current (see
below) and high ADU values compared with normal pixels in the image. They are defects of the sensor;
normally some are tolerated at the periphery, but there should be none or very few in the central area.
The pattern of defective pixels is repeatable from image to image and can be corrected by first recording
their coordinates in a file (called a defect map) then replacing the ADU values of these pixels in science
and calibration images with a value interpolated from surrounding normal pixels. This corrective process
is applied before any other calibration steps.
Hot pixels are detected in dark images and dead pixels in flats. The ADU threshold set by the user
determines which pixels are included. At ISO 100 a threshold of 500-1000 ADU above the black level of a
dark image is a good starting point. Consult your photometry software manual for the precise method of
creating a defect map.
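The thresholding idea can be sketched as follows (a minimal Python/NumPy illustration, not the exact algorithm used by any particular photometry package; the default threshold simply reflects the starting point suggested above):

import numpy as np

# Flag hot pixels in a master dark frame by simple thresholding above the
# black level, then repair them by local interpolation.
def build_defect_map(master_dark, threshold=750):
    black_level = np.median(master_dark)
    hot = master_dark > (black_level + threshold)
    return np.argwhere(hot)                 # (row, column) coordinates of defects

def apply_defect_map(image, defect_coords):
    fixed = image.copy()
    rows, cols = image.shape
    for r, c in defect_coords:
        # Replace the defective pixel with the median of its 3x3 neighbourhood.
        patch = image[max(r - 1, 0):min(r + 2, rows), max(c - 1, 0):min(c + 2, cols)]
        fixed[r, c] = np.median(patch)
    return fixed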
The defect map process is very effective, takes very little processing time and doesn't cost observation
time to prepare the file. If it is available in your photometry software it is recommended that it be used.
A defect map can be used for several months; its validity is limited only by the aging of the sensor.
Important note: defect replacement should only be performed if you are heavily oversampled. If a defect
occurs in a star profile, you are making assumptions as to what the proper interpolated value might be,
and those assumptions will fail if adjacent pixels differ much in intensity.
In CMOS image sensors the photodiode operates in reverse bias; that is, a positive voltage is applied to
the cathode relative to the anode so that the normal forward current is blocked. The remaining current is
due to electrons liberated by the photons falling on the photodiode. But there is another tiny current that
exists in any diode, the reverse (leakage) current of the blocked junction. This signal is small, about
0.1 - 1.0 electron per second, and results in a small increase of
the output ADU level of the pixel.
Figure 4.4. Horizontal strips or banding can often occur in Canon DSLR images. These strips are
normally at a very low level (a couple of ADU) and are caused by noise in the digital circuits of the
sensor prior to the ADC. There are algorithms to eliminate these artifacts, but they are not common in
astronomical software. Properly applied background subtraction tends to mitigate this source of noise.
The normal inverse current is fixed by the design of the sensor and all pixels have the same positive shift
due to it. The corresponding accumulation of electrons in the pixel is proportional to the exposure time.
This results in some elevation of the global black level (more or less like the sky background). In fact this
is not visible in our images as it is compensated by the DSLR electronics. The only effect remaining is the
corresponding shot noise, which increases the random noise level of long exposures.
The inverse current of diodes is also very sensitive to the temperature of the diode. It typically doubles
every 5 to 10° C. Therefore the electron charge increase is proportional to the exposure time and an
exponential function of the temperature of the sensor. Although the CMOS sensor itself often generates
very little heat (i.e. has a low power dissipation), the camera’s processor will elevate the ambient
temperature of the camera. Typically a camera will warm by about 10°C after an hour of use. This is much
less of a problem than for CCD cameras, which require active cooling. Thus normal dark current is less of
an issue in DSLR cameras than in CCD cameras.
Dark Current Impulses
DSLR astrophotography is frequently plagued by a few (~3%) deviant pixels that have significantly
higher dark current than normal. These deviant pixels appear much brighter in the image and are often
called hot pixels or “dark impulses” (e.g. the bright pixels in Figure 4.2). Dark impulses are not
measurable in very short exposures because they are just below the random noise range of most recent
DSLR cameras; however, they become an issue in longer exposures.
Although dark impulses are a truly annoying anomaly in astrophotography, they have less of an impact in
photometry where the light is (intentionally) dispersed over a few hundred pixels. Background subtraction
and stacking/averaging also reduces the impact of dark impulses.
An often overlooked point is that applying calibration frames (which we advocate later in this chapter) also
introduces some additional random noise into the science images. To minimise this extra noise we use
master bias, dark and flat frames made from at least 16 individual frames, but the more the better. Image
signal scales linearly with the number of frames but random noise scales with the square root of the
number of frames so signal-to-noise ratio (SNR) improves as more frames are added.
Fixed pattern noise due to bias and any systematic offset are usually removed from science images by
subtracting a master bias image. The master is made by stacking a number of shots taken in absolute dark,
of very short exposure, at the ISO value used for the science images.
Bias frames can be collected at any time because sensor temperature and focus setting are not important
considerations. So cloudy nights are ideal for preparing master bias frames. Set the shutter speed to the
shortest available on your DSLR (typically 1/4000th second), ensure no light can reach the sensor (lens
cap on, viewfinder blocked, darkened room) then record at least 16 images or as many as several hundred.
Consult your photometry software manual for instructions on how to prepare the master bias from these
individual frames.
A separate master bias frame should be made for each ISO setting used for science images. They can be
used for months. The limit is the possible aging of the electronics.
Artificial Bias Correction
Subtraction of a master bias frame inevitably adds some amount of random noise (even when several
hundred individual bias frames are used to construct the master frame). Instead, some people subtract an
artificial image in which all pixels have the same value as the systematic offset, i.e. 1024 or 2048 ADU.
This has the effect of removing the systematic offset from science and calibration images without adding
extra random noise, but at the expense of retaining the FPN due to bias.
There are several approaches to dark correction. The choice as to which one to use will depend on the
specific characteristics of the images being calibrated and the options available in your photometry
software.
No Dark Correction
Images recorded with exposure times less than 30 seconds in cool ambient temperatures may not show
significant dark current or dark impulses. This is usually the case for flat frames where exposures are
typically only a few seconds. In this situation dark correction is not necessary and in fact would add
random noise without significantly improving photometric precision. It would be wise to check your
camera’s own characteristics under various temperature and exposure settings before adopting the no dark
correction option.
Many DSLRs have an option for in-camera long exposure noise reduction. Immediately after taking a
science image the camera automatically records another with exactly the same exposure but without
opening the shutter. The second image is subtracted from the first before saving the corrected image file
to memory card or computer. Neither the original science nor the dark images are saved.
In principle this seems like a good idea; however, in practice it is not. The camera uses one dark image
per science image, so the random noise added is much greater than when a master dark frame is subtracted.
(Note: this is mitigated somewhat if you will be stacking several science images.) More importantly, half of the
observing time is spent taking dark frames so the number of science images is greatly reduced. The one
advantage of this in-camera process is that the temperature of both images will be very similar, but this is
not sufficient compensation for the disadvantages.
In general, in-camera long exposure noise reduction and other such options should be disabled.
In the classical process at least 16 dark images are recorded during the observing session, under the same
settings and conditions as the science images (ISO, exposure time, temperature). Any possible leak of
light into the camera must be eliminated (viewfinder covered and lens cap on). A master dark frame is
then made using these individual dark frames. Consult your photometry software for specific steps.
It is difficult to make a set of dark images with the same dark impulse level as the science images because
the sensor temperature of the DSLR is not stabilized. To mitigate this issue, some people collect half the
dark images before starting the science images and the other half afterwards. This tends to bracket the
temperature range science images are recorded under and can lead to improved dark correction.
You may need to use different exposure times for different targets depending on their brightness. With
classical dark correction it would be necessary to create a master dark frame for each exposure time used,
at the cost of extra time spent recording individual dark frames.
Some photometry packages have an option for scaling a long exposure master dark frame so that it can be
used for dark correction of shorter exposure science frames. This can work reasonably well as long as the
temperature is not significantly different.
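The scaling itself is straightforward; a hedged sketch (assuming a comparable sensor temperature and a master dark that still contains the bias signal) looks like this:

# Scale a long-exposure master dark for use with a shorter science exposure.
# Only the dark-current component scales with exposure time; the bias/offset
# does not, so it is removed before scaling and added back afterwards.
def scale_dark(master_dark, master_bias, t_dark, t_science):
    dark_current = master_dark - master_bias        # thermal signal only
    return master_bias + dark_current * (t_science / t_dark)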
A more sophisticated procedure available in several photometry packages (e.g. IRIS and MaxIm DL)
scales the master dark frame to minimize the RMS noise of the final image. This procedure can
accommodate temperature differences between the dark and science frames, even changing sensor
temperature throughout the observing session.
The master dark frame can be made at any time; no need to do it during the observing session. It should
be useful for several months; the limit is possible aging of the sensor.
Flat field frames are images of an evenly illuminated source which reveal asymmetries or artifacts in your
camera’s optical setup. Unlike dark correction, flat field correction is mandatory for all images intended
for photometry. Flat field images must be recorded with the camera and telescope/lens in the same
configuration (focus, f-stop, ISO, etc.) used for the science images. Exposure times should be adjusted to
avoid saturation.
Finding or making such an evenly illuminated source is surprisingly difficult and has led to many, shall
we say, interesting discussions at AAVSO conferences. Thus we cannot (and dare not) advocate one
particular technique. Before presenting a few popular options, we offer a few general words of advice:
Care should be taken to ensure that each of the RGB channels receives sufficient intensity in one image.
Ideally this should be about ⅔ of the “saturation limit” of your camera. Because you will be observing a
much brighter source than when doing photometry, you will need to use shorter exposure times (typically
1-2 seconds) than on your science images.
Even though the exposures are short and there is no appreciable dark current, the bias and offset signals
are still present. Be sure to subtract the master bias frame from the master flat frame before applying any flat
correction to science images.
Because flats are supposed to be images of a uniformly illuminated source, they will correct for any
vignetting and pixel-to-pixel sensitivity variations that are present (provided the camera and
telescope/lens configuration is not altered). However, dust shadows may change due to movement of dust
on the optical surfaces and changes in focus settings. To minimize this effect, disable any ultrasonic
cleaning options on your camera. Flat frames should be prepared regularly, but not necessarily every
night.
As with all calibration steps, flat field correction adds noise to the calibrated image. To minimize the
amount of noise added the master flat images are made from multiple flat frames. You should aim for at
least 16; more if time allows. Your photometry software will have an option for making a master flat
frame from individual frames using either average or median combine routines. The median option is
usually preferred because star images in individual sky flats or cosmic ray traces will not adversely affect
the master flat frame.
When photographing through a telescope the field of view is usually small enough that images of the
twilight sky (which is reasonably uniform on the scale of a degree or so) can be used as flat field frames.
There is limited time in which to record sky flats during evening and morning twilight, and it may be
necessary to vary the duration of each frame to ensure adequate exposure as light levels change.
If you are making sky flats, it is best to turn your telescope’s tracking off, so that any star images in your
picture will be trailed to different positions on each flat frame; the “median combine” (rather than
“averaging”) option in your photometry software will eliminate them from your master flat.
For wider fields acquired with standard or telephoto lenses, indirect lighting techniques must be used.
Dome Flats
A flat target such as a piece of mat board illuminated by the twilight sky or diffuse artificial lighting can
be suitable. Make sure that the target board more than fills the entire image.
Alternatively, a light box can be constructed and placed over the front of the camera lens for acquiring
flat field images. These allow control over illumination levels and can be used at any time, instead of
having to wait for suitable twilight conditions. Instructions for light box construction are readily available
on the internet. One simple but effective design is described in the Handbook of Astronomical Image
Processing by Richard Berry and James Burnell.
In recent years electroluminescent (EL) panels have become readily available and some people have
successfully used these for flat field imaging. They are less bulky than traditional light boxes and easier to
use in the field, but can be relatively expensive.
4.5 ISO and exposure times
If there were a top 10 list of DSLR photometry questions, then those involving exposure times, ISO
settings, and ensuring images are of photometric quality would certainly occupy the top slots. Picking
these settings requires thoughtful consideration of both your camera’s noise characteristics and the
science objective you wish to accomplish. In this section we explain the careful tradeoff between
sensitivity and precision and provide a few guidelines for optimal settings.
Selecting the right ISO setting is choosing between two evils. As discussed in Chapter 2, the ISO setting
simply adjusts the gain setting on the amplifier used to read out the pixel values. One might expect a high
ISO setting to be ideal for photometry, but this is not always the case. At high ISO, the camera will show
fainter sources, but this will amplify not only the starlight, but also the noise. Additionally, a high ISO
will reduce the camera’s dynamic range (the range of brightness contained in an image). Thus high ISOs
limit the range of magnitude differences your camera will be able to detect.
Conversely, at low ISO values, small differences in electric charge will be assigned the same value by the
ADC, thus the precision of the detector is lost. The latter situation is called “quantization error”.
Quantization error can be easily illustrated in a non-technical fashion with the following image of a clear,
blue sky at a beach (see Figure 4.6). We know from everyday experience that the brightness of a clear sky
varies smoothly along a gradient. However, if a camera cannot detect subtle variations in brightness, it
will produce a strange-looking image in which the sky has a “stair-step” appearance, as is the case with
this image.
Figure 4.6 An otherwise smooth gradient of the blue sky in this image is split into a series of discrete
intervals due to quantization error.
This artifact is more than just ugly. In the context of DSLR photometry, it also degrades the photometric
value of the image. The beach image should use hundreds of different intensities to represent the sky, but
here, it only uses five, which is why the sky is divided into five unrealistic-looking zones. (Note: in fact,
quantization error occurs at high ISO as well, but in this case because your gain is so high that the
addition of one electron means multiple ADU steps.)
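A toy numerical illustration (not real camera data) shows how a coarse digitization step produces the stair-step effect described above:

import numpy as np

# A smooth sky gradient spanning 5 units of brightness...
gradient = np.linspace(100.0, 105.0, 1000)
# ...digitized with a coarse 1-unit step survives as only a handful of levels.
digitized = np.round(gradient)
print(len(np.unique(digitized)))    # 6 distinct output values: a visible staircase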
Through some experimentation, we have found an ISO setting of 200-400 should achieve a good balance
between precision and noise, with lower ISO values (e.g. 100) better for brighter stars. Thus if your
science topic will involve a wide range of magnitudes, you should probably stick to the low end of this
range. Likewise, if you are observing a field with many stars of similar magnitude, a higher ISO setting
may be acceptable, as long as the higher ISO setting does not saturate the stars.
With DSLR photometry, the observer must be careful to ensure that the camera’s images are of
photometric quality. There are several pitfalls which can cause even a good-looking image to be
scientifically worthless, and it is crucial that the observer be able to identify these issues in the field while
the data is being collected. One of these issues is knowing how to set an appropriate exposure in order to
avoid problems with saturation and nonlinearity, both of which are concepts that we describe in the
following paragraphs.
Understanding the concept of linearity requires a brief, minimally technical digression into how DSLRs
detect light. When light strikes a pixel in the sensor, it creates an electric charge in the pixel which is
proportional to the intensity of the light. Thus, if star A is 2 times brighter than star B, it should generate
an electric charge twice as great in the pixels that it shines on. However, there is a maximum amount of
charge that any one pixel can hold. Once a pixel reaches this limit, it cannot hold any additional charge,
so any additional light that falls upon it will not produce a corresponding increase in the charge held by
that pixel. This is called saturation. In a sense, once saturated, a pixel has become “blind” for the
remainder of the exposure and will no longer have a linear response to light. This does not harm the
camera, but it does mean that it is impossible to obtain meaningful photometry of the saturated star.
(Photometry of non-saturated stars in that image will not be affected.) In practice, then, it is absolutely
essential to ensure that neither the target star nor any of the reference stars is saturated.
Closely related to saturation is the concept of non-linearity. Normally, when light from a constant source
falls onto a pixel, there will be a direct linear relationship between the exposure time (plotted on the x-
axis) and the electric charge (the intensity, plotted on the y-axis). For example, doubling the exposure
time should double the intensity at a given pixel. However, for CCD-type detectors, as a pixel approaches
saturation the formerly linear relationship becomes highly non-linear. With a nearly saturated star
image, for example, increasing the exposure time by 10% might result in only a 5% increase in charge
(rather than the expected 10%). Non-linearity is even more dangerous in photometry because it is less
obvious to detect than is saturation. Fortunately, DSLR cameras now exclusively use CMOS sensors that
do not have the problem of non-linearity that CCD sensors do.
Why should anyone care about saturation and non-linearity? Photometry rests upon an intuitive
presumption that there is a direct, linear relationship between (a) how bright a star appears in an image
and (b) its actual brightness. Once a pixel loses its linear response to light, this assumption breaks down
because the electric charges held by non-linear/saturated pixels do not correspond with the true brightness
of a star. In the accompanying figure, Star A is one magnitude brighter than Star B, but once star A
becomes saturated, the differential magnitude goes from -1 toward 0—even though neither star has varied
in true brightness. Knowing the level of intensity at which your camera’s pixels begin to become
saturated, therefore, is important.
The easiest way to avoid issues with saturation is to simply keep the maximum intensity for the target and
reference stars below 75% of the maximum value for your camera. If you have an older 12-bit camera, the
maximum intensity is 2^12 = 4096 counts, so you would need to keep the intensity below 3072 counts to
be safe. For a 14-bit camera, 12288 counts would be the cutoff. These numbers are very conservative but
allow for changes in observing conditions, such as seeing or transparency, that might push a star into
saturation.
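The cutoff values quoted above follow directly from the bit depth; a one-line sketch makes the rule easy to reuse for any camera:

# Conservative peak-pixel limit: 75% of the ADC full-scale value.
def safe_peak_limit(bit_depth, fraction=0.75):
    return fraction * (2 ** bit_depth)

print(safe_peak_limit(12))   # 3072.0 ADU for a 12-bit camera
print(safe_peak_limit(14))   # 12288.0 ADU for a 14-bit camera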
Figure 4.7 As Star A saturates, the measured differential magnitude between Star A and Star B drifts
from -1 toward 0 even though neither star has varied in true brightness (see text).
Picking ISO settings and exposure times can be a time consuming process. You should refer to Tables 3
and 4 in Chapter 2 for some starting guidelines, but your first few evenings of DSLR photometry may be
best spent getting a feel for the best camera settings for targets that interest you.
4.6 Finding and framing the field
At first this is one of the most frustrating parts of the learning curve, especially if you are using a tripod.
This is also where experience conducting visual observations really pays off. The same problems you
have finding a field visually will apply to DSLR photometry. The difference is that your field of view will
be smaller. Here are a few recommendations:
● Learn to use star charts to find fields visually and/or with binoculars.
● Practice on easy-to-find and frame fields.
● Locate the nearest bright star (V < 4-5) to your target area. Use it for rough alignment.
● Looking through a camera that is pointing high in the sky is difficult for many people. Consider
purchasing a right-angle finder for the camera.
● Take one test exposure and examine it on your camera. Use your camera’s zoom-in feature to
identify asterisms which may help you with further alignment.
● Set your camera to RAW format (e.g. Nikon’s .nef or .nrw and Canon’s .cr2 or .crw)
● Verify your camera’s date and time are correct. If at all possible, set the camera to UTC rather
than leaving it on local time. If you must leave the camera on local time, make sure it is as
accurate and precise as possible (preferably to the closest second), and clearly note the difference
between the camera time and UTC in your evening's log.
● Defocus the stars slightly to the point that they are round and occupy several pixels. The stars
should be round and fully-filled. If they start to look like doughnuts, you’ve gone too far. Star
images can be quite different on either side of focus. Experiment to determine if under or over
focus is best for your lens.
● Use the live-view feature to check focus and field framing, but shut it off when not needed. The
ambient heat from the display may increase noise on the sensor, the light it generates may diminish
your night vision, and it may needlessly increase your power consumption, especially if you’re
using batteries.
● Take the images in a low ISO setting (typically 100 - 200). Although higher ISO levels are more
sensitive, they suffer from a loss of precision.
● Shut off any noise reduction or built-in image processing options in your camera.
● Shut off any ultrasonic / automatic optics cleaning options in your camera.
● Practice operating your camera indoors before taking it outside in the dark.
Chapter 5: Image Assessment, Image
Processing, and Aperture Photometry
5.1 Overview
This chapter will generically describe how to translate your science images into accurate photometry, a
calibrated measurement of the brightness of a variable star at a specific moment in time. The major steps
in the process after image acquisition are (1) checking that all of your calibration and science images are
suitable for photometry; (2) applying calibration frames, co-registering and stacking images to increase
SNR; (3) extracting the individual RGB channels from the image(s); (4) performing aperture photometry
on the target and calibration stars; and (5) performing final quality checks. Please note that steps 2 and 3
depend on the capabilities of your photometry software and may need to be reversed.
Before we get started, we will assume that you’ve followed the instructions for acquiring images in
Chapter 4, and have a full set of calibration frames in addition to your science images. To summarize
what you should have in hand, make sure you have the following:
● A full set of bias frames (zero-second exposures), which you will turn into a Master Bias frame
(should have at least 16 and often many more)
● A full set of dark frames, which you will turn into a Master Dark frame (10-20 per exposure time
and ISO setting)
● A full set of flat fields, which you will turn into a Master Flat field frame (5 or more)
● All of your science frames
We’ll assume that when you took your science and calibration frames, you used appropriate exposure
times that provide sufficient signal but avoid saturation of stars of interest. As part of this chapter you’ll
check that this is indeed the case, but we won’t otherwise discuss how to acquire images here. Refer to
Appendix A for a procedure to determine your best exposure times, and Appendix B for how to test your
camera for linearity prior to your first observing session. Both the assessment of exposure times, and
testing of your camera’s linearity properties should be done before you’re ready to start taking data
regularly -- they’re something you’ll likely do once for each camera that you use, and then keep notes on
the results for future observing runs. You should also do the tests outlined in Appendices C and D to
examine the noise characteristics of your camera, and to assess whether you have suitably "flat" flat
fields.
The image header: Before you took your images, you had selected the camera settings that you intended
to use (exposure duration, ISO setting, color balance setting, file type). Examine the header of your
reduced image and confirm that you did, indeed, get what you intended. (It is not unknown to intend to
take a 30 sec image but, in the cold and dark of late night, inadvertently take a 3 sec exposure.)
The original image format: Confirm that your original image was “RAW” format (the file extension is
usually *.CR2 for Canon cameras, and *.NEF for Nikon cameras). You cannot do useful photometry
with the compressed “JPEG” file format (*.jpg). Your image processing software may convert the RAW
file to a FITS format image; this is expected, and is a full-fidelity conversion that retains all of the
information in the original image.
Image date and time: Confirm that the timestamp on your image header appears to be correct. The raw
image should have a timestamp that accurately records the time the image was taken. Beware of errors in
setting your camera’s clock, daylight saving time, and date change at midnight. Most cameras record the
time that the shutter was triggered, i.e. the start of the image. Your image processing program may adjust
the image time, or add another keyword, so that the time recorded in the header of the calibrated image is
the mid-point of the exposure (i.e. T_reduced = T_start + 0.5 * T_exposure). Most astronomical image processing
and photometric analysis programs also attempt to translate the image time into UT (Universal Time)
based on the information that you’ve given the program about your time zone. It is worthwhile to double
check that this has been done correctly, at least the first few times that you use the program, to make sure
that the recorded image time in UT is correct. Most astronomical image processing programs also
calculate the Julian Date that corresponds to the mid-point of the image. This is the preferred time for
reporting your photometry and submitting your data to AAVSO. Again, it is worthwhile to check that
your program is doing this correctly the first few times you use the program, or if you change any time-
related settings in the software or in your camera.
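If you want to verify the conversion yourself, the mid-exposure Julian Date can be checked with a few lines of Python using the astropy library (the time stamp and exposure length here are placeholders):

from astropy.time import Time

t_start = Time("2014-03-22T21:30:00", format="isot", scale="utc")  # shutter-open time (UT)
exposure_s = 30.0
# Mid-exposure Julian Date: start time plus half the exposure, expressed in days.
jd_mid = t_start.jd + 0.5 * exposure_s / 86400.0
print(jd_mid)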
The reasons for each of these calibration frames are explained in the previous chapter. Your image
processing software will have some built-in method for applying the calibration frames to your science
frames. In simple language, both bias and dark frames are subtracted from an image (because the effects
of bias and dark current are additive backgrounds in the signal), and so the software will subtract the counts in
each pixel in a bias or dark frame from the corresponding pixel in the frame to which the correction is
being applied. Flat fielding, on the other hand, is a multiplicative correction, because vignetting and
pixel-to-pixel sensitivity differences mean each pixel records only a fraction of the light it would receive
under uniform illumination, and that fraction varies with position in the focal plane. The software will
normalize the flat field so that the mean pixel value is
1.000, and then divide each science frame pixel value by the corresponding flat field normalized value.
As an example, if a given pixel in a flat field is 97% of the mean value, you divide that pixel in the
science frame by 0.97. Again, your software should do all of this behind the scenes; typically you will
only need to tell the software the names of the bias, dark, and flat frames, and then follow whatever
instructions are provided by your software to apply each correction.
For most DSLR photometry projects, the target stars are of sufficient brightness that they easily register
on each exposure; however, in some cases (e.g. for faint sources) it may be necessary to first align and
then stack (coadd) your images to increase the effective SNR of each source. Most modern photometry
software has some functionality to perform these operations (almost) automatically.
If you do stack your images, be sure to examine the resulting images critically. Verify that the images are
correctly aligned before stacking. After stacking, examine the header of the image and verify that the time
makes sense. Ideally it will be automatically adjusted to the mean time of the group of images.
Binning
Like stacking, binning is an optional procedure. Binning combines the signal in several adjacent pixels to
create an image that is smaller in size, but with slightly higher SNR in each pixel. Most photometry
software has this functionality built-in, but not all software properly accounts for the Bayer array nature
of DSLR data. For example, when AIP4Win 2.4.0 extracts the green channel from DSLR data, the red
and blue channels can either be scaled by multiplicative factors (for color balance in imaging) or set to
zero (which simply interpolates across the red and blue pixels using the green values). You should check
your software’s documentation prior to binning, and understand what it is doing to avoid unwanted
behavior.
The process of stripping the green pixels away from the red and blue pixels is sometimes called “de-
Bayering” (since it unravels the Bayer mask and selects pixels from only one color). Many modern
photometry software packages are able to extract the green channel from RAW images, although they do
this procedure differently. For example, AIP4Win can be set to extract both green channels and present
them as a unified image of the same size as the initial image, interpolating between pixels. Conversely,
MaxIm DL requires you to specify which elements of the Bayer array you wish to extract. The best
procedure is to extract both green channels, add them together, and perform photometry on the resulting
image. Be sure to verify that the target and comp stars are not saturated in the original or resulting image.
You can de-Bayer your images before or after image calibration. It does not matter which order you
choose so long as all data (calibration frames and science frames) are treated identically.
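As an illustration of what the software is doing internally, the two green channels of an RGGB Bayer mosaic can be pulled out with simple array slicing (a sketch in Python/NumPy; check your own camera's Bayer pattern, which may instead be GBRG, GRBG, etc.):

import numpy as np

def extract_green_rggb(bayer):
    # In an RGGB mosaic the green pixels sit at (even row, odd column)
    # and (odd row, even column).
    g1 = bayer[0::2, 1::2].astype(np.float64)
    g2 = bayer[1::2, 0::2].astype(np.float64)
    # Summing the two green sub-images gives a half-resolution frame
    # containing all of the green signal.
    return g1 + g2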
The signal level and signal-to-noise ratio of your stars: The images of your target, comparison, and
check stars must be bright enough to present a good signal-to-noise ratio, but not too bright – they must
fall below the saturation point of your camera. Place your photometric measuring aperture over each star
(target, comp, and check) in turn, and examine two parameters: the peak-pixel value, and the signal-to-
noise ratio. The peak pixel value must be below the saturation point of your camera. If your star images
are saturated, then your only recourse is to re-take the images, after making an adjustment to reduce the
peak pixel value. (Possible adjustments include using a shorter exposure, or using a slight intentional de-
focus to make the star profile larger). By the way, this requirement of staying within the saturation limit
of your camera’s chip is one of the most profound differences between taking images for “pretty pictures”
of celestial objects and taking images for scientific measurements: the science images will generally
appear bland and washed-out compared to the pretty pictures (which generally saturate the stars in order
to make the scene more visually pleasing).
The size and shape of star images: Your image processing program will be able to show you the
intensity-profile of your star images (as a graph). You want them to be not too narrow and not too wide.
The basic measure of the width of a star profile is the Full Width at Half Maximum (FWHM) value. The
FWHM of the stars on your RAW image (before calibration and de-Bayering) should be no less than
about 10-12 pixels. The reason for this is to ensure that your star image is well-sampled. If a star image
is too narrow, then the resulting photometry can be adversely affected by geometric sampling effects. For
example, imagine that the star image covers only 1 pixel: if the star sits on a green pixel, then you will
see a certain ADU signal; if the star moves to a red pixel, (for example, due to telescope tracking error)
then you will see different ADU signal on the RAW image; and (depending on how your software does
de-Bayering) the star may disappear completely on the de-Bayered image. With a sufficiently-wide star
image, the star covers many pixels, and thus the summed ADU count over the entire star will not change
as the star translates across the image sensor.
Can your star images be too wide? In general, stars much larger than about 30 pixels may be difficult for
your photometry program to handle. Also, as star images become broader, there may be a higher risk that
light from one star spreads into, and corrupts the brightness estimate of, its neighbors. So, check the
FWHM of your target, comp, and check stars, and confirm that they are large enough to be well-sampled,
and yet small enough to reliably place your photometric measuring aperture around the star, to collect all
of its light. Pick a photometric measuring aperture that is sized for your stars. Your photometric software
may have a tool to test the effect of adjusting the aperture size on both the measured flux and signal to
noise (for example, AIP4Win’s MMT photometry tool). To start, you can set the diameter ≈ 2.5-3 times
the FWHM to begin doing reasonable photometry quickly, but note that for detailed work, there is some
science to selecting the optimum measuring aperture. See Section 5.6.2 below.
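For readers who want to experiment outside their regular package, the measurement itself can be sketched with the photutils library (the function names below reflect recent photutils versions and may differ in older releases; the star position and FWHM are assumed to be known already):

from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

def measure_star(image, x, y, fwhm):
    # Measuring aperture with a diameter of roughly 2.8 times the FWHM.
    aperture = CircularAperture((x, y), r=1.4 * fwhm)
    # Sky annulus placed just outside the star image.
    annulus = CircularAnnulus((x, y), r_in=2.0 * fwhm, r_out=3.0 * fwhm)
    star_sum = aperture_photometry(image, aperture)["aperture_sum"][0]
    sky_sum = aperture_photometry(image, annulus)["aperture_sum"][0]
    sky_per_pixel = sky_sum / annulus.area
    # Sky-subtracted flux inside the measuring aperture.
    return star_sum - sky_per_pixel * aperture.area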
Background star blending: Your photometric analysis will use aperture photometry, which adds up all of
the ADU counts within a circular measuring aperture that contains your star image. Obviously, if there is
a background star that is so close to your target (or comp or check) star that it lies wholly or partly within
the measuring aperture, the light from the background star will corrupt your photometry. So, critically
examine the region close to your target, comp and check stars for any background stars - even quite faint
ones. Note the location of any potentially-interfering background stars, and try to select a measuring
aperture diameter that will exclude them.
It is also a useful effort to examine a star-chart (or planetarium program) of your image field to see if
there are any potentially-interfering background stars within about 5 magnitudes of the brightness of your
target, comp or check stars. You may not be able to see them on your image, but if one is there, it will
add light into your measuring aperture. The preferred approach to dealing with these is to keep them out
of your measuring aperture. If that isn’t practical, then note in your report the existence of the
background star within your measuring aperture.
The problem of background stars will be a bit more likely in the case of intentionally de-focused images
(which is a ‘best practice’ to get sufficiently wide FWHM), and in the case of unguided images (which
give little star-trails instead of star-image-dots). If a background star is separated from your target (or
comp or check) star, but it trails into the measuring aperture, you may be able to avoid that by using
shorter exposures and stacking them after calibration (to recover the signal-to-noise ratio that was lost by
using a short exposure).
Uniformity of the background: Examine the entire reduced image for two subjective quality aspects:
flatness and cirrus. If you use your image-processing program to stretch the image to highlight very
minor brightness differences, do you see evidence of dust donuts (the rings that show up on your image
due to dust on preceding optical surfaces) or significant uncorrected vignetting (which would indicate that
something went awry with your flat-fielding)? If this effect is apparent, and the ADU variation is greater
than a few percent of your target/comp/check star peak pixel ADU count, then you should probably
investigate the reason, and re-do your flat fielding.
The other image non-uniformity to look for is in the sky itself. Thin cirrus clouds and aircraft contrails
that were not visible to your naked eye may show up as a pattern of changing sky-glow and transparency
across your image. This effect is more likely to be seen, and be an issue, in wide-field images, such as
those taken using standard camera lenses (e.g. focal lengths of less than a few hundred mm). With
narrow-field images taken through a telescope, the FOV is likely to be so narrow that there is negligible
variation in sky glow and extinction across the image. If the problem is a contrail, and it doesn’t come
near any of the stars that are important to you (i.e. your target, comp, or check stars), then you can ignore
it. If the problem is definitely thin cirrus, then anticipate some related fluctuations in your photometry.
Depending on the target and your project, the evidence of cirrus might require only that you be aware of
the effect and critically examine your resulting photometry in light of the (now-known) inconstant sky
conditions, or, as a worst case, that you set aside your images and try again the next night.
How many images should you examine for these features that make the image usable for photometry?
That depends to some degree on your observing program. If you are studying a star whose brightness
changes very slowly (say a Mira star whose characteristic fluctuation time is several months), then you
may be taking only a couple of images at one time during the night. In that case, critically examine only
one image. At the other extreme, suppose that you are studying an eclipsing binary whose period is a few
hours. Then, you will be taking images every few minutes, all night long. During an all-night imaging
session, all sorts of things can change in addition to your target star’s brightness. So, select three images
to critically examine – one near the beginning of the evening, one near the middle, and one near the end
of the observing session. Your critical evaluation will show you if the full set of images is OK, and – if
something changed dramatically during the night – will give you some clues about what happened and
why, so that you can take preventive actions the next night. (For example, if your stars go out-of-focus
over the course of the night, then your lens focus may be changing with pointing direction or with
temperature).
During your first few nights and first few projects, this critical evaluation of your images will teach you
quite a bit about your camera and the settings and imaging choices that are most appropriate for your
target star(s) and project. You should keep a notebook with the camera settings, lens used, and other
factors, along with notes on the resulting image quality. In short order, you will be able to zero in on the best set
of parameters (especially exposure time) for each target, based on the target magnitude (and comp and
check star magnitudes), the lens or telescope to be used, and typical conditions at your observing site.
your software will probably do this for you once you tell it where to center the aperture, and how wide
you want the apertures to be. This entire process is what is known as aperture photometry, and is by far
the simplest and most common means of performing photometry in uncrowded fields.
Note: The aperture size that you select should be the same for all the stars that you are measuring in
your image. A good method of selecting a proper aperture size is to use your software to graphically
determine the size of your largest star image and set the aperture size as noted above to encompass an
area large enough to include the area where the star blends into the sky background. It is best to
graphically view a profile of your target star to determine the star’s width and not just use the displayed
image because the display may be modified to give a more pleasing image on the computer screen but
give a false impression of the actual width of the star image. See below for a graphical star profile.
If your software cannot produce a graphical star profile, you may be able to determine the size of the star
image by having the software calculate its Full Width Half Maximum (FWHM). As mentioned above, a
good rule of thumb is to set your aperture diameter equal to 2.5 to 3 times the FWHM of your largest star
image. It is better to err on the larger side if you are unsure what aperture size to use, especially if you
take your images from a non-tracking mount such as a simple camera tripod. You can do a fairly
straightforward test of what your optimal aperture should be by repeating measures of a set of stars from a
single image using a variety of apertures. The largest aperture you should use is the one where you see no
noticeable increase in flux and the SNR is maximum. (If you use larger apertures, you gain no additional
flux, but do gain the noise present in the extra pixels.)
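If you would like to automate that test, the minimal Python sketch below (not any particular package's routine, and using a made-up image, star position, and sky level) measures one star through a series of aperture radii and prints the sky-subtracted flux and a rough signal-to-noise estimate; look for the radius where the flux stops growing and the SNR peaks:

    import numpy as np

    def aperture_flux(image, x0, y0, radius, sky_per_pixel):
        # Sum of sky-subtracted counts inside a circular aperture built from a pixel mask.
        yy, xx = np.indices(image.shape)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        n_pix = mask.sum()
        flux = image[mask].sum() - n_pix * sky_per_pixel
        return flux, n_pix

    # Stand-in data: a flat sky of about 100 ADU with a fake 8x8-pixel "star" added.
    rng = np.random.default_rng(0)
    image = rng.poisson(100, size=(64, 64)).astype(float)
    image[28:36, 28:36] += 500.0
    x_star, y_star, sky_level = 31.5, 31.5, 100.0

    for r in range(2, 13):
        flux, n_pix = aperture_flux(image, x_star, y_star, r, sky_level)
        noise = np.sqrt(flux + n_pix * sky_level)   # rough shot-noise estimate
        print(f"radius {r:2d}: flux {flux:8.0f}   SNR {flux / noise:6.1f}")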
5.6.2 Selecting the annulus size and position
The annulus is used to determine the value of the sky background. There is some freedom that can be
used in its placement and size but a few rules of thumb are useful. Since the sky background will be
calculated from an average of a number of pixels in the annulus, a significant number of pixels should be
included in the annulus. What is significant? At a minimum it should contain the same number of pixels
as the star aperture and preferably more. You can do this by increasing the size (diameter) and/or
thickness of the annulus. Where possible, you should also try to avoid having too many background stars
contained in the annulus. Most good photometric software will compensate for them but best practice is
to avoid them if possible.
Placement of the inside radius of the annulus is usually a few pixels outside of the edge of the star
aperture. You can vary this placement to avoid having too many background stars in the annulus, as long
as the annulus retains enough pixels for the software to calculate a good average value for the sky
background; at a minimum, it should contain as many pixels as the star aperture.
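A quick way to check that rule of thumb is to compare areas directly: a circular aperture of radius r covers about pi*r^2 pixels, and an annulus of inner radius r_in and outer radius r_out covers about pi*(r_out^2 - r_in^2). The short Python sketch below, with made-up radii, simply verifies that the chosen annulus holds at least as many pixels as the star aperture:

    import math

    # Hypothetical radii in pixels: measuring aperture and sky annulus.
    r_aperture = 8.0
    r_inner, r_outer = 11.0, 15.0

    n_aperture = math.pi * r_aperture ** 2
    n_annulus = math.pi * (r_outer ** 2 - r_inner ** 2)

    print(f"aperture pixels ~ {n_aperture:.0f}, annulus pixels ~ {n_annulus:.0f}")
    if n_annulus < n_aperture:
        print("Annulus too small: widen it or move the outer radius outward.")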
Your camera can only record incoming light accurately over a specified range, limited by the amount of
charge the pixels can collect before they saturate. Appendix A outlines a procedure by which you can
approximately determine the saturation limits for your DSLR, and in so doing let you establish guidelines
for exposure time settings for a range of target magnitudes. Even if you’ve established a set of exposure
time guidelines, it is good procedure to verify that your science images do not suffer from saturation.
The simplest way to check for saturation of your star images is done by measuring each star with the
aperture discussed above, and doing both of the following steps. First, plot a radial profile of the star
image, and if possible, see if it appears flat-topped rather than rounded. This isn’t always possible, but
certainly can be obvious when stars are badly saturated. Second, examine the pixel values within the
radial profile. The pixel values must be below the saturation limit, and ideally they should also be below
the linearity limit. The generation of such plots was discussed above.
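If your software exposes the raw pixel values, the same check can be scripted. The Python sketch below uses placeholder limits (substitute the saturation and linearity values you measured for your own camera in Appendices A and B) and a made-up image; it simply reports the brightest pixel in a small box around the star:

    import numpy as np

    # Placeholder limits in ADU; use the values measured for your own camera.
    SATURATION_ADU = 15500
    LINEARITY_ADU = 12000

    def peak_in_box(image, x0, y0, half_width=10):
        # Brightest pixel value in a small box centred on the star.
        return image[y0 - half_width:y0 + half_width, x0 - half_width:x0 + half_width].max()

    image = np.full((64, 64), 900.0)      # stand-in calibrated frame
    image[30:34, 30:34] = 14000.0         # fake, nearly saturated star
    peak = peak_in_box(image, 32, 32)
    if peak >= SATURATION_ADU:
        print(f"peak {peak:.0f} ADU: saturated - reject this star")
    elif peak >= LINEARITY_ADU:
        print(f"peak {peak:.0f} ADU: above the linearity limit - use with caution")
    else:
        print(f"peak {peak:.0f} ADU: OK")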
All modern image processing and photometry programs will allow you to select one or more targets, one or
more comps, and one or more check stars in each image, and will perform all of the relevant calculations.
The results of aperture photometry are calibrated but not zero-pointed measurements of the brightnesses
of each object in the field for which a statistically significant detection was made; the number that comes
out is simply a count of how many ADU were generated by incoming photons. The steps outlined in
5.6.1 and 5.6.2 let your software measure that information from your science images and turn it into
something more useful.
What you need to do at this point is to measure (a) the variable star itself, and (b) two or more additional
constant stars in the field that serve both to calibrate the amount of light from the variable and to check
how stable your measurements are. The AAVSO has pre-defined comparison stars for many variable
stars, so selecting which stars to measure should be simple if you have a finder chart with comparison
stars labeled. By the end of this process, you’ll have a table for each image that you measure containing a
list of stars measured. It is likely that your photometric software will do one additional step, which is to
convert the units of the measurement from linear units (ADU) to logarithmic units called instrumental
magnitudes. The reason for this is that magnitude units are the traditional unit of stellar brightness, and
nearly all optical variable star data are expressed in magnitudes. And the qualifier instrumental is used
because the magnitude is relative to some instrumental calibration that you don’t (yet) know at this step.
Again, how this step is done may depend on your software, so review the software documentation to see
whether star measures are tabulated as fluxes, instrumental magnitudes, or both.
These star measures -- along with the time you took the image -- give you all the information you’ll
eventually need to turn your instrumental magnitudes into an observation that you submit to the AAVSO,
but you have one more step to go through. The physically important number is how bright the objects in
the field are in some common measurement system. In Section 5.7, we explain how to go from these
instrumental magnitudes to a true magnitude that you can submit to the AAVSO and compare to data by
other observers by performing differential photometry.
Addendum:
There are other ways to perform photometry than aperture photometry. Two that you might hear
about are “point-spread fitting” and "image subtraction", both of which are rarely included in
commercial photometry analysis packages, but are used in the professional community. For
example, you may come across mentions of photometry being performed with a package called
"DAOPhot" in your readings. This is a very powerful (but very complicated) PSF-fitting
package developed two decades ago at the Dominion Astrophysical Observatory. The benefits
of methods like these are that they work in crowded fields where the images of your target star
may be blended with nearby stars, or where it is difficult or impossible to find a patch of sky free of
faint stars in which to measure the background. Again, both of these methods are beyond the
scope of this manual, but as you become more experienced, you may want to investigate them on
your own. At first, aperture photometry will work admirably well on nearly all variables you will
observe.
To determine the brightness of your target variable star, we compare its brightness with the known
brightnesses of hopefully non-variable stars in your image. Essentially, the sum of the pixel values in the
star image of your target variable star (var) is proportional to the sum of the pixel values in a star of
known brightness, the comparison star (comp). In addition, as a check of the quality of the values we are
deriving for our comparison star and sometimes to verify the non-variability of the comparison star, we
also sum the pixels of another star of known brightness in the image to use as a check star (chk).
Those differences can be seen in the software outputs as sums of pixels or, preferably, magnitudes. For
some observing programs (e.g. determining the times of minimum for eclipsing binary stars), those
differences are all the data that are required for the science program. In other types of programs they are
the first step toward calculating the observed brightness of your science target.
The photometry routine in your software is used to analyze the brightness of your target star, a
comparison star (“comp”), and a check star (“check”) on the calibrated science image. Your software’s
instructions will tell you how to invoke the Photometry routine. The fundamental purpose of the
photometry routine is to determine the total ADU counts received from each star – that is, the sum of the
ADUs on all pixels that received starlight from the selected star – and compare two or more stars to
determine the magnitude difference between the stars.
On the calibrated image, each pixel contains (only) “star+sky” ADU counts. The challenge for the
photometry routine is to subtract out the “sky” counts, and then add up only the “star” counts. Almost all
commercial image processing/photometry programs do this in the same way. The photometry routine
will place three concentric circles around a star that you indicate (as illustrated in Figure 3). The inner
circle is the measuring aperture. The software will add up the ADU counts of all pixels inside the
measuring aperture; this is proportional to the total “star+sky” photons that were received. Since you
want to know the total number of photons that were received from the star, make the diameter of this
measuring aperture large enough that it encompasses the complete star image.
Figure 3: Diagram of a typical set of measurement aperture and annuli showing the central aperture for
the star and a sky annulus that lets you determine the local sky background around the star (Courtesy
Robert Buchheim).
The outer two circles form an annulus (a donut). The pixels within this annulus contain only sky-glow,
with (ideally) no starlight. The sum of the ADUs of all pixels in this sky-annulus indicates the sky
brightness. Your software will take the “star+sky” total from the measuring aperture and subtract the sky-
only total from the sky annulus (appropriately scaled by the number of pixels in each aperture). This
leaves a value that is equal to the star (only) ADU counts. This approach to photometry is sometimes
called aperture photometry, and it is by far the most commonly-used method.
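The bookkeeping your photometry routine performs can be written out explicitly. The following bare-bones Python sketch, run here on a synthetic frame with made-up radii, is only an illustration of the aperture-plus-annulus arithmetic just described, not a substitute for your photometry software:

    import numpy as np

    def star_counts(image, x0, y0, r_ap, r_in, r_out):
        # Sky-subtracted star ADU counts from a circular aperture and a sky annulus.
        yy, xx = np.indices(image.shape)
        r2 = (xx - x0) ** 2 + (yy - y0) ** 2
        aperture = r2 <= r_ap ** 2
        annulus = (r2 >= r_in ** 2) & (r2 <= r_out ** 2)
        sky_per_pixel = np.median(image[annulus])        # sky level per pixel
        return image[aperture].sum() - sky_per_pixel * aperture.sum()

    # Illustration with a synthetic frame: a flat 100 ADU sky plus a fake star.
    image = np.full((64, 64), 100.0)
    image[29:35, 29:35] += 800.0
    print(star_counts(image, 31.5, 31.5, r_ap=8, r_in=11, r_out=15))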
Now, you have a value for the star-only ADU counts for the (one) star that you selected. That in itself is
of limited value, since the number will change with different camera settings, different sky conditions,
etc. Suppose, however, that you run through the same routine and determine the counts for another star in
the image. Call your first star the “target” and the second star the “comp”. The magnitude difference
between these two stars is
Δmag = –2.5 × log10(ADUtgt / ADUcomp)
Since you’ve measured both stars on images taken through the camera’s green pixels, we’ll call this the
magnitude difference in green light:
Δmag = Gtgt – Gcomp = –2.5 × log10(ADUtgt / ADUcomp)
Since this is a measure of the difference in magnitude between the two stars, this analysis is called
“differential photometry”. The wonderful thing about differential photometry using two stars in the same
image is that the difference is almost completely unaffected by changes in the camera settings, sky
transparency, etc. For example, suppose that you calculated Δmag from one image; and then took another
image with twice as long an exposure. Double the exposure will double the number of ADU counts on
each star; but the calculated Δmag will be unchanged, because it is a function of the ratio of the count
from the two stars.
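A quick numerical illustration of this point, using invented counts: suppose one image gives 40,000 ADU for the target and 58,000 ADU for the comp, and an exposure twice as long gives roughly 80,000 and 116,000 ADU. The magnitude difference is the same in both cases because it depends only on the ratio of the counts:

    import math

    def delta_mag(adu_target, adu_comp):
        # Differential magnitude from the ratio of sky-subtracted counts.
        return -2.5 * math.log10(adu_target / adu_comp)

    print(delta_mag(40_000, 58_000))    # about +0.40 mag
    print(delta_mag(80_000, 116_000))   # same value: counts doubled, ratio unchanged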
If the comparison star’s brightness (Gcomp) is constant, then any change in Δmag = [Gtgt – Gcomp] is telling
you the brightness change in your target star. For many variable stars that deserve photometric
monitoring, the AAVSO charts identify candidate comp and check stars that have been confirmed to be of
constant brightness, and whose magnitudes and colors have been well-measured.
How do you confirm that your comp star is, indeed, constant over the time span of your observations?
That is the purpose of the check star. If both of these stars are non-varying, then the magnitude difference
between comp and check should be invariant:
Gchk – Gcomp ≈ constant
If you see the “check minus comp” magnitude difference changing, then one or the other of these stars is
variable. You’ll need to decide which one (by, for example, comparing both of them to a second check
star), and replace it in your differential photometry analysis.
Chapter 6: Photometric Calibration
For some projects, this can be the completion of the photometric analysis. (Examples of projects that may
use only differential photometry are finding the “time of minimum” of an eclipsing binary star, or the
rotational lightcurve of an asteroid).
For many other projects, you will want to determine the “actual” brightness of your target star, on a
standard scale. For example, you may want to be able to report that your target star was magnitude 8.4 at
the time you observed it. The standard photometric systems are based on (a) a standardized brightness
scale, and (b) a set of standard spectral bands (colors, or spectral response functions). The “standardized
brightness scale” means that certain “standard” stars are given defined magnitudes, and the magnitudes of
all other stars are determined by reference to these standard stars. A standard spectral band means that
the brightness of a star is measured using a sensor that has a specifically-defined response to different
wavelengths. Astronomers have defined several standard photometric systems, for different technical or
historical reasons. For the moment – as a pretty good approximation that is perfectly adequate for many
situations – we will utilize your de-Bayered “G” image data, and defer a discussion of spectral passbands
until a later section.
Assuming that your star images were not saturated and that your image was properly reduced, there is a
simple relationship between your measured differential photometry and the standard magnitudes of the
target and comp stars:
[Gtgt – Gcomp] ≈ Vtgt – Vcomp
where [Gtgt – Gcomp] is the magnitude difference between target and comp, which is what you determined
with differential photometry, Vtgt and Vcomp are the magnitudes of the target and comp on the standard V-
band photometric system, and the symbol “≈” means “is approximately equal to”.
The significance of this equation is as follows: Suppose that you know the magnitude of your comparison
star (from a reference source). You can re-arrange this equation to give you the V-band magnitude of
your target:
Vtgt ≈ Vcomp + [Gtgt – Gcomp]
So, if the target is 0.4 magnitude fainter than the comp star (i.e., [Gtgt – Gcomp] = 0.4), and you know that
the comp star is Vcomp= 8.0, then you can report that the target star is Vtgt≈ 8.4.
Where do you find the V-magnitude of your comp star? If your target star has an AAVSO chart and/or
photometric sequence, then that chart/sequence contains V-magnitudes for several stars in the field.
Select one of those stars to be your “comp” star. (If more than one sequence star is available in your
image, then select a second to be your “check” star).
If your target does not have an AAVSO chart or sequence, then the most convenient sources of standard
magnitudes are the APASS database or the “Homogeneous Means in the UBV System (Mermilliod
1991)” database. APASS is freely available and searchable at the AAVSO website. To use it, you only
need to know the celestial coordinates of your target star. The main APASS page is at
http://www.aavso.org/apass
The Mermilliod catalog can be searched through the VizieR service at
http://vizier.u-strasbg.fr/viz-bin/VizieR
From there, enter the coordinates in the search-by-position entry box and click on “go”. In the resulting
output, scroll down until you see the table “Homogeneous Means in the UBV System (Mermilliod
1991)”.
Most photometric analysis software programs will provide a way for you to enter the V-magnitude of
your comp star, and will then do the necessary calculations to report the V-magnitude of the target star,
based on your image data. (Which means that you never have to actually work through Eq. 3).
The approximation in the above equations and analysis is good enough that your data can be submitted to
AAVSO, to be included in their database of variable star observations. When you submit it, it should be
identified as photometry in the “TG” filter. [“TG” means that the photometry represents “measurements
using G pixels (only) from a tri-color digital sensor, and based on standard V-magnitude of the comp
star”]. This filter designation is used on the AAVSO submission forms to distinguish DSLR (and deep-
sky tricolor) photometry from several other filter systems.
TG magnitudes are valuable and useful contributions to the analysis of many short- and long-period
variables, novae and supernovae.
This analysis has some weaknesses, which is why we used the symbol “≈” (“approximately equal”)
instead of a conventional “equals” sign. These weaknesses relate to sensor spectral response and
atmospheric extinctions. The spectral response issue is that your camera’s “G” spectral response is not
exactly the same as the astronomer’s standard V-band, and no adjustment was made for this difference.
There are methods to determine the effect of this spectral band difference and to adjust your photometry
to eliminate the effect. This is called “transforming” your photometry to the standard system; that topic
will be covered in the next section.
The weakness related to atmospheric effects is that we have implicitly assumed that atmospheric
extinction is the same for the target and the comp star. So far, we have not attempted to assess this, nor
made any adjustment for atmospheric extinction differences between the target and comp stars. When
using relatively wide field-of-view images (made with relatively short focal-length lenses), which are
appropriate for many DSLR photometry projects, there is likely to be some atmospheric extinction
difference between the target and comp stars. There are ways to estimate the atmospheric difference, and
adjust for it; these will also be described in the next section.
6.2 Transformation
The treatment of DSLR “G” pixels as “almost V-band” is an approximation, albeit not a bad one for many
projects, stars, and situations. But, just as “R” (red) pixels have a different spectral sensitivity response
than do “G” pixels, so too the “G” pixel response is different from the astronomer’s standard V-band
spectral response. This difference in spectral response can translate into a difference in the assessed
magnitude of the target star, depending on the color of the target star and the color of the comp star. This
difference is particularly important when your measurements are going to be correlated with
measurements by other observers, whose system spectral response is different from your system spectral
response. The technique for putting your photometry onto the standard V-band photometric system is
called “transforming” it.
There are two imaging situations that can utilize slightly different approaches to transforms: (1) when
you are using your DSLR through a telescope, and hence have a quite narrow field of view (say, only a
few degrees), you can usually ignore differential atmospheric extinction. With a wider field of view, if
your target is high in the sky (say, no more than about 30 degrees from the zenith), you may still safely
use the “narrow field” approach. On the other hand, (2) when you are using your DSLR with a standard
or telephoto lens, and have a field of view of more than a few degrees, and your target star is more than
about 30 degrees from the zenith, you will probably want to account for the effect of differential
atmospheric extinction (in which your target star is seen through an atmospheric path that is measurably
different than the path to the comp). These two approaches to transforming your measurements to the
standard system are discussed separately below.
For the first (narrow-field) case, the transformed V magnitude of the target is:
Vtgt = Vcomp + [Gtgt – Gcomp] + T × [CItgt – CIcomp] (Eq. 2)
where Vcomp is the standard V-magnitude of the comp star, [Gtgt-Gcomp] is the measured differential
magnitude (i.e. the G-magnitude difference between the target and comp stars), T is your system’s
“transformation coefficient”, and CItgt = (B-V)tgt and CIcomp = (B-V)comp are the color indices of the target
and comp stars. You determine these by looking them up on the AAVSO chart/photometry, or in a
database such as APASS. (The color index for the target star may be a function of the phase in the light
curve for a variable, especially for pulsating stars. For simplistic transformation, we assume a mean
color, which does most of the correction to the standard system. For true completeness, you would
determine the calibrated color index of the target star simultaneously with the G magnitude, which is
beyond the scope of this manual.)
Note that on the right-hand side of this equation, you have measured [Gtgt-Gcomp]; and you will look up
Vcomp and the color indices CItgt and CIcomp in a reference source (such as APASS). In order to use
this equation to put your data onto the standard system, you must know the transformation coefficient, T.
You determine T for your system by conducting the little special project described below.
There are several fields around the sky where a large number of stars have been accurately calibrated,
with V magnitudes and [B-V] color indices. Examples are the “Landolt standard fields”, and several well-
characterized AAVSO fields, which can be found on the AAVSO website. For southern hemisphere
observers, the Cousins E region standard star fields at -45 degree declination are also recommended. You
can use such a field to calculate your system’s transformation coefficient, T. The basic idea is that you
will look at the difference (V-G) for a large number of stars with known photometry, and determine (V-
G) as a function of the star’s color index.
On the images of a well-characterized star field, use your photometry software to determine the average
“instrumental magnitude” (in G) of a dozen or more stars. Make an effort to select stars that span a wide
range of colors (ideally from B-V≈ -0.5 to B-V≈ +2, but you may not be able to get this full range; do the
best that is practical). For each star, derive the “instrumental magnitude” from the total ADU of the star
(your photometric software will properly sum the star “counts” within the measuring aperture and adjust
for the sky background). The equation for instrumental magnitude is:
Gstar = –2.5 × log10(ADUstar)
where ADUstar is the sum of all pixels within your measuring aperture, minus the sky background (as
determined from the “sky annulus” pixels). Most photometric reduction software will do the calculation
for you, when you click on the star, and report it as the star’s instrumental magnitude.
Now use your spreadsheet program (e.g. Excel) to do the analysis that leads to your system’s transform.
For each star, enter the measured ADU count (or instrumental magnitude Gstar, if your software provides
it), calculate Gstar if necessary, and enter the star’s standard V-mag, and standard color index [B-V] into
your spreadsheet (one row per star). Plot “V-G” versus Color index; the result should be points that fall
pretty close to a straight-line. Use your spreadsheet’s linear trend line feature to find the best linear fit,
and display the equation of the fit. The transformation coefficient, T, is just the slope of this best-fit line.
(The y-intercept of the line is called the “zero point” by photometrists, but you can ignore it for now).
Figure 6.1 Best-fit line through V-G residual versus B-V color points used to determine the
transformation coefficient, TV, which in this case is -0.116. (Figure courtesy Mark Blackford)
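If you prefer a script to a spreadsheet, the same straight-line fit shown in Figure 6.1 can be done in a few lines of Python. The sketch below assumes you have already tabulated, for each standard star, its instrumental G magnitude, catalog V magnitude and catalog B-V color (the numbers shown here are invented); the slope of the fitted line is your transformation coefficient T and the intercept is the zero point:

    import numpy as np

    # Invented example data: one entry per measured standard star.
    G_instr = np.array([-8.12, -7.45, -9.03, -6.88, -7.90, -8.55])   # instrumental G mags
    V_cat   = np.array([ 9.41, 10.05,  8.52, 10.63,  9.66,  9.02])   # catalog V mags
    BV_cat  = np.array([ 0.12,  0.55,  0.95,  1.40,  0.30,  0.75])   # catalog B-V colors

    # Fit (V - G) as a linear function of (B - V); slope = T, intercept = zero point.
    T, zero_point = np.polyfit(BV_cat, V_cat - G_instr, 1)
    print(f"transformation coefficient T = {T:+.3f}, zero point = {zero_point:.3f}")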
If nothing ever changed, then this project to determine your system’s transform would need to be done
only once, because “T” is a parameter that is related to your system (telescope and camera), not to the
individual stars. Unfortunately, things do change ... dust accumulates, optical coatings degrade with age
and abuse, and (since the atmosphere is a part of your optical system) both weather and your pointing
direction might have an effect on “T” for your site. The only way to know if things are changing in your
system is to check occasionally, re-doing your determination of “T” (and keeping a record of the results).
How often should you do this? Opinions differ. Certainly it is a good idea to use data from more than
one night, and more than one field of standard stars, the first time you determine “T” (to get an idea of the
variability in your result). It is also wise to re-check your transformation coefficient each year or so. If
for some reason you need to image a target that is at an unusually high air mass, it will increase your
confidence if you also check your transformation coefficient using a field of standard stars at a
comparably high air mass.
You must also determine a unique T for each unique observing system. For example, if you have two
telescopes, then each “camera+telescope” combination will have its own unique transformation
coefficient. And if you change anything in the optical path (e.g. add or subtract a window, or buy a new
camera lens), then you must determine a new transformation coefficient for the newly-changed system.
We also highly recommend that you use several such images, calculate your transformation coefficient
for each one, and then average the values together to reduce the noise inherent in any analysis.
Now that you know your system’s transformation coefficient, you can transform your measurement
(taken in G-band) into a standard V-band magnitude, using the equation (Eq. 2) given before and repeated
below:
Vtgt = Vcomp + [Gtgt-Gcomp] + T[CItgt-CIcomp] Eq. 6
Note that in this equation, you do need to look up the color index of your target star in a catalog. This
means that there will be a slight ambiguity introduced even in this transformed Vtgt value, if your target
star’s color index changes over time. This is not uncommon with long-period pulsating variables; and is
sometimes noticeable on several different types of stars with short-period fluctuations. Still, even granted
this slight ambiguity, the use of “transformed” magnitudes is more accurate than untransformed; and
transforming your results to standard V-band makes it easier to correlate your measurements with those of
other observers.
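As a worked example with invented numbers: suppose the catalog gives Vcomp = 8.00, your differential photometry gives [Gtgt – Gcomp] = +0.40, your transformation coefficient is T = –0.116 (the value in Figure 6.1), and the catalog color indices are CItgt = 1.20 and CIcomp = 0.60. Then the transformed magnitude works out to about 8.33:

    # Worked example of the transform equation with invented inputs.
    V_comp = 8.00                   # catalog V magnitude of the comp star
    dG = 0.40                       # measured [Gtgt - Gcomp]
    T = -0.116                      # transformation coefficient (slope from Figure 6.1)
    CI_tgt, CI_comp = 1.20, 0.60    # catalog B-V colors of target and comp

    V_tgt = V_comp + dG + T * (CI_tgt - CI_comp)
    print(f"Vtgt = {V_tgt:.2f}")    # 8.00 + 0.40 + (-0.116)(0.60) = 8.33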
However, when you use your DSLR camera with a normal camera lens, you probably have a fairly wide
field of view – easily several degrees and maybe larger than 30 degrees. For some projects involving
bright stars, it is necessary to take advantage of this wide field of view, because your target and suitable
comp star may be widely separated. If they are more than a few degrees apart, then their light passes
through different atmospheric paths, and hence the “differential atmospheric extinction” can be
significant. The importance of this effect grows as (a) the separation between the two stars increases, and
(b) as one or both is farther from the zenith. (Note: we are ignoring another effect called second order
extinction, which is dependent on the color of each star, but which is usually a much smaller effect than
normal differential extinction.)
In this situation, “transformation” must include the effect of spectral-response differences (as above) plus
the effect of differential atmospheric extinction. This is done by adding one more term to the differential
photometry equation:
Vtgt = Vcomp + [Gtgt – Gcomp] + T × [CItgt – CIcomp] – k × (Xtgt – Xcomp) (Eq. 7)
In Eq. 7, k is the “atmospheric extinction coefficient” (measured in magnitudes per air mass), and Xtgt and
Xcomp are the air masses of the target and comp stars, respectively.
The airmass expresses how long an atmospheric path the star’s light passes through on its way to your eye
or camera. For small airmass (say, X<2), airmass is calculated by X = sec(z) where z is the angular
distance of the star from the zenith. Most photometric analysis software will calculate airmass for you, if
you give it your location and time zone, the image time, and the equatorial coordinates of the star.
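If your software does not report airmass, the sec(z) approximation is easy to evaluate yourself. The small sketch below (valid only for stars reasonably high in the sky, roughly X < 2) converts a star's altitude above the horizon into a zenith distance and airmass:

    import math

    def airmass(altitude_deg):
        # Approximate airmass X = sec(z) for a star at the given altitude (degrees).
        zenith_distance = math.radians(90.0 - altitude_deg)
        return 1.0 / math.cos(zenith_distance)

    print(airmass(60.0))   # 30 degrees from the zenith -> X is about 1.15
    print(airmass(30.0))   # 60 degrees from the zenith -> X = 2.0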
The other terms in Eq. 3 are the same as defined for Eq. 2.
The discussion below explains one way to determine the transformation coefficient and atmospheric
extinction coefficient (simultaneously), for the situation of wide field-of-view images. Several fields
have been well-characterized specifically for the problem of wide-field images: the open cluster Messier
67 (M67), IC4665, and the Coma Berenices cluster.
When do you need to use Eq. 7, and the mathematically-intense procedure described below to determine
the atmospheric extinction coefficient? If your field of view is aimed within about 20 degrees of the
zenith, and is less than about 20 degrees wide, you can usually safely use the simpler transform approach
described above for Eq. 2. As your field grows larger, or your target or comp star is more than about 30
degrees from the zenith, then the importance of differential atmospheric extinction grows, and it is
important to use Eq. 3 if you are going to transform your magnitudes to the standard V-band.
First-order airmass corrections may be applied to DSLR images using the following equation (Henden
and Kaitchuck 1982):
V – v = T(B–V) – kX + ZP
where the newly introduced variable, k, is the extinction coefficient, and X is the airmass. This equation
has the same functional form as a geometric plane in three dimensions: z = Ax + By +C. Kloppenborg et
al. (2012) gives a method for solving this equation. If we assume that the instrumental magnitude, v,
depends only on the right side of the above equation, then we may solve for the coefficients k, T and ZP,
using a minimum of three calibration stars in the field of view. However, if one calibration star is “bad”,
either incorrectly identified or its magnitude/airmass improperly calculated, the coefficients will be
skewed. For this reason, we usually take many more stars and do a linear least squares solution to remove
any outliers and improve the resultant coefficients.
A least-squares fit of n calibration stars to the plane defined by the equation z = Ax + By + C is found by
solving for the coefficient matrix, X, in the following expression, using the inverse of A:
AX = B Eq. 9
It is not necessary to write software to solve these equations, as many spreadsheet programs and
programming languages have similar built-in routines. For example, Excel has the “linest” function. If
you wish to write your own reduction algorithm, Python’s “scipy.optimize.leastsq” function can be used
for this task.
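As an illustration of that least-squares solution, the following minimal Python sketch (with invented calibration-star data, and following the plane form written above) builds one row per star and lets NumPy solve for the three plane coefficients, which are then read off as T, –k, and the zero point:

    import numpy as np

    # Invented calibration-star data: instrumental mags v, catalog V, B-V color, airmass X.
    v_instr = np.array([-7.90, -8.40, -7.10, -8.95, -7.55, -8.20])
    V_cat   = np.array([ 9.60,  9.15, 10.42,  8.55,  9.95,  9.30])
    BV_cat  = np.array([ 0.20,  0.65,  1.10,  0.45,  1.45,  0.85])
    X_air   = np.array([ 1.25,  1.30,  1.22,  1.35,  1.28,  1.40])

    # Model each star as (V - v) = T*(B-V) + (-k)*X + ZP, i.e. a plane z = Ax + By + C.
    z = V_cat - v_instr
    A = np.column_stack([BV_cat, X_air, np.ones_like(X_air)])
    coeff, residuals, rank, sv = np.linalg.lstsq(A, z, rcond=None)
    T, minus_k, zero_point = coeff
    print(f"T = {T:+.3f}   k = {-minus_k:.3f} mag/airmass   ZP = {zero_point:.3f}")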
● “Obstype”: Assuming that your instrument was a DSLR, your data type is “DSLR”.
● “Magnitude” is the “Vtgt” magnitude, determined by one of the methods described above. The
“Mag Error” either comes from your software package, or can be determined by taking 3 or more
images of each science field, averaging the derived magnitudes, and determining the standard
deviation (a short sketch of this calculation follows this list).
If you used the simple photometry formula (Eq. 1) to estimate the target’s V-mag, then do not
check any of the check boxes.
If you used one of the transformation methods, Eq. 2 or Eq. 3, then check the “transformed” box.
Check the “differential” box only if you are submitting [Gtgt –Gcomp], without determining the comp
star V-magnitude. Note that AAVSO discourages this data format, unless for some reason it is the only
feasible approach.
● “Filter”: If you used Eq. 1 (i.e. you have not transformed your data from “G” to standard V-
band), then show the filter as “TG”. If you did transform to standard V-band (i.e. you used Eq.2
or Eq. 3), then set “Filter” to “V” (and also check the “transformed” box above).
● “Chart ID”: If at all possible, use an AAVSO chart. DSLRs tend to have wide field, and VSP
doesn’t make nice wide-field charts, so you may use another method, but be sure to identify the
chart as thoroughly as possible.
● Comp, Check stars: use the instrumental magnitudes as determined above.
● Airmass: use the mean airmass between the target star and the comparison star.
● Group: If you are submitting B,G,R magnitudes from a single frame, indicate this by giving all
three measures the same group number. It helps in identifying that the magnitudes were obtained
from the same image. (Note: this manual ONLY describes the process for the green channel in
order to simplify the explanation. The B and R channels do not correspond well to standard
Johnson-Cousins B and R bands.)
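If your package does not report a magnitude error, the repeated-image estimate mentioned in the “Magnitude” item above is easy to compute yourself. The short sketch below, with invented magnitudes from three images of the same field, reports their mean and standard deviation, which you would enter as the magnitude and its error:

    import statistics

    # Invented V magnitudes of the same target derived from three separate images.
    mags = [8.42, 8.39, 8.44]

    mean_mag = statistics.mean(mags)
    mag_error = statistics.stdev(mags)   # sample standard deviation
    print(f"magnitude {mean_mag:.3f} +/- {mag_error:.3f}")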
After pressing Submit Observation, you will be given a chance to look at your submission and confirm
that it is correct.
The other method of submitting data is more complex, and involves submitting a file of observations
rather than one at a time. Most software packages provide an output called “AAVSO Extended Format”,
which is what you should use as a file type. Be sure that the “obstype” in that output file is set to DSLR.
There is an option in WebObs to browse for your file and then submit it, with a similar checking and
permission process as for the individual submissions.
You should go to the Light Curve Generator after submission and see how your data compare with other
observers. It is fun watching a light curve being built in real time!
Chapter 7: Developing a DSLR Observing
Program
In an effort to simplify the decision of what to observe, we provide a list of 10 stars for northern observers and 10
stars for southern observers that we feel are good stars for newcomers.
Knowing the limitations and potential of your equipment is important. How faint are the faintest stars you
can detect with your equipment? What are the brightest stars you should attempt to observe in your
telescope? Is there a limit? You don’t need to know everything about DSLR cameras. You just need to
know everything you can about your DSLR camera.
What you’ll be able to observe is determined largely by your observing site, sky conditions, how often
you can go out, and how long you can stay out to observe, as much as by the type of equipment you use.
Your experience comes into play here also. Are you organized and prepared? Do you know the star
fields? Can you point your camera at the target efficiently? Do you know how to get the highest precision
measurements from your equipment?
Later on, when you decide to expand your observing program there are several criteria you will want to
take into account.
It is important to understand the value of DSLR observations and which stars and what areas of research
hold the most promise for DSLR observers to make a valuable contribution to science. You need a way
to assess the current and likely future landscape of variable star research to find answers to these
questions. One easy way is to look at the observing programs, opportunities and campaigns provided by
or recommended by the AAVSO and other variable star organizations.
7.1.4 The Fun Factor
If you don’t think this is fun you won’t do it. So what makes it fun for you? Is it the challenge? Is it being
outside and in touch with the Universe? Are there some stars you like to observe just for the fun of it?
How serious do we need to be? Is any of this important? Can we take this seriously, contribute to science
and still have fun?
Believe it or not, this makes a difference. We want you to be successful, happy and productive. If this
isn’t fun for you, you won’t be.
For newcomers to DSLR photometry it is desirable that the amplitude of the target variable is more than
0.3 magnitude. Unless sky conditions are good it is difficult to get accurate measurements if the
amplitude is less. To give newcomers a challenge we have chosen one star - Beta Cephei - which has a
small amplitude. In its favor, this star is high in the sky for northern hemisphere observers. Because its
light does not have to travel through as much atmosphere as that of a star lower in the sky, greater
accuracy is possible.
We have specified 30 images for a Beta Cep measurement to make it possible to achieve the necessary
precision.
We have not chosen stars fainter than magnitude 9 because we are recommending what is achievable
using an undriven camera on a tripod. Commonly used lenses for a DSLR are the 85mm, 100mm and
200mm lenses. The latter gathers the most light and can enable precision photometry down to magnitude
9. The 85mm and 100mm lenses are good down to about magnitude 8.
We think that all the stars on our recommended list are easy to find with commonly available planetarium
programs or from a basic knowledge of the brightest stars in the sky. We have facilitated the search with
customized charts, bearing in mind the likely field of view of a DSLR and a common lens.
For precision photometry a number of comparison stars and a check star are required. This is called
ensemble photometry. The comparisons and check star have to be near the target star so that all the stars
are, more or less, shining through the same thickness of atmosphere. If this was not the case errors would
be introduced which would make precision photometry difficult. If the stars were too far apart precision
work would be impossible.
All the stars we have chosen are interesting and/or ‘famous’ stars. They are all of interest to professional
astronomers. They all illustrate important stages in the evolution of stars. Mira was recognized, in 1596, as
the first known variable star. Mu Cephei is a large star in the final stages of its evolution that is expected
eventually to explode as a supernova.
We consider that some of our recommended stars are suitable for high school and college projects. Many
of our stars have extensive references on the web which will be helpful for a student project.
Some of the stars we have chosen have regular variation (such as the two Cepheids and two eclipsing
binaries in the northern hemisphere) and three (in the northern hemisphere) have a cycle which is fully
completed in only a few hours. A study of the cycle can therefore be completed in an evening observing
session.
The list includes some recommended settings. These settings are for undriven cameras. Driven cameras
will have different settings. The ISO will be lower (100 or 200) and the exposure may be longer (to
minimize the effect of scintillation). The settings are good for 85mm, 100mm and 200mm lenses. The
numbers refer to focal length of the lens. The key attribute of a lens is the aperture (which determines the
light it gathers). The 85mm and 100mm lenses have apertures of 58mm and the 200mm lens has an
aperture of 72mm. If you do not have one of these lenses you should adjust settings accordingly, taking
account of the aperture. It is worth noting that the full aperture of a lens is only used at its lowest f-stop;
as it is stopped down, the aperture is reduced. If you are using a zoom lens, use the setting with the widest
aperture.
NORTHERN STARS
(Columns: Name; Observing time; Magnitude range; Type of variable; Period (days); Notes)

Z UMa; Year Round; 6.2 - 9.4; Semi-regular variable; 195.5; Can be observed every 5 days. You may need to change settings to take account of the large change in magnitude.
delta Cep; Year Round; 3.5 - 4.4; Classical Cepheid; 5.366; Can be observed twice in one night or once before midnight. It is a famous historical variable star with a regular, distinctive light curve.
Algol; August to May; 2.12 - 3.39; Eclipsing Binary; 2.87; The eclipse lasts about 8 hours. Measurements should be made for at least two hours each side of the predicted minimum. To produce a reasonable light curve you will need at least ten measurements. Measurements can be made every 15 minutes.
beta Lyr; April to November; 3.3 - 4.4; Eclipsing Binary; 12.9; It is a semi-detached eclipsing star, which means that it is in continuous eclipse. For most of its period one measurement per night is sufficient. Around the primary minimum (a one and a half day period) measurements can be made every hour.
mu Cep* (red); Year Round; 3.43 - 5.1; Semi-regular variable; 835; One measurement per night is sufficient.
eta Aql; Summer star; 3.48 - 4.39; Classical Cepheid; 7.176641; Can be observed twice in one night or once before midnight. It is a famous historical variable star with a regular, distinctive light curve.
Mira* (red); August to February; 2 - 10.1; Mira; 331.96; This star is measurable for 100 days either side of maximum.
R Lyr; April to November; 3.81 - 4.44; Semi-regular variable; 46:; One measurement per night is sufficient.
bet Cep; All year round; 3.16 - 3.27; Beta Cephei pulsating variable; 0.1904881; As it has a very small amplitude it will require 30 images to make a measurement in good sky conditions. It has a regular period and is continuously changing. A whole period can be measured in one session with measurements every five minutes.
BE Lyn; October to April; 8.57 - 8.97; High Amplitude Delta Scuti (HADS); 0.09586954; It has a short regular period which can be studied in one session with ten measurements every five minutes.
V474 Mon; November to March; 5.94 - 6.31; High Amplitude Delta Scuti (HADS); 0.136126; It has a short regular period which can be studied in one session with ten measurements every five minutes.

SOUTHERN STARS
(Columns: Name; Magnitude range; Type of variable; Period (days); Notes)

W Sgr; 4.29 - 5.14; Classical Cepheid; 7.59503; W Sagittarii is a triple system made up of the cepheid, a close F5 dwarf, and a more distant A0 star.
kap Pav; 3.91 - 4.78; W Virginis pulsating variable; 9.083; Kappa Pavonis is by far the brightest example of a Population II (old) cepheid. These are low-mass stars less luminous than classical cepheids. Kappa Pavonis displays abrupt period changes; continuous monitoring helps track them.
bet Dor; 3.41 - 4.08; Classical Cepheid; 9.8426; Beta Doradus is only 0.1 mag. fainter than l Car and follows it in the ranking. Classical cepheids are massive Population I (young) stars.
l Car; 3.28 - 4.18; Classical Cepheid; 35.53584; l Carinae is the brightest Cepheid in the sky after Polaris, if we take apparent magnitude. Note “l” is a lower case L.
R Car; 3.9 - 10.5; Mira; 307; R Carinae will require the use of several different settings to cover its entire range.
V Pup; 4.35 - 4.92; Eclipsing Binary; 1.4544859; V Puppis is an eclipsing binary of the beta Lyrae type. Brightness changes are continuous due to the ellipsoidal shape of the components.
R Dor; 4.78 - 6.32; Semi-regular variable; 172; R Doradus is a semiregular star showing two maxima and strong amplitude changes from cycle to cycle. It is one of the largest stars with a diameter measured interferometrically from Earth.
zet Phe; 3.91 - 4.42; Eclipsing Binary; 1.6697671; Zeta Phoenicis is an Algol-type eclipsing binary. You will find the star at maximum brightness most of the time, so catching an eclipse will require patience.
RY Lep; 8.05 - 8.46; High Amplitude Delta Scuti (HADS); 0.2251475;
RS Gru; 7.94 - 8.48; High Amplitude Delta Scuti (HADS); 0.1470117;
A number of considerations contribute to planning an observing session. You have to make an estimate of
the number of hours that will be available for observing. You may have one hour or a few hours or a
whole night. If you have an hour then you will not be able to observe an eclipsing binary that will require
measurements over four or more hours. But in one hour you may be able to measure five stars that only
need one measurement per night. There may be a weather forecast that the whole night will be clear but
you will not tackle an eclipsing binary because no eclipse is forecast for that night. A suitable star may set
too soon for a measurement or may not rise high enough until well after midnight. Some stars have their
season for observing. At other times they are below the horizon or too near the sun. However some stars
will always be potential targets such as Delta Cephei and Beta Cephei. Sometimes an interesting and good
night’s work can be done by testing equipment and settings. You can check on the difference in
magnitude of two unvarying stars. Another worthwhile experiment is to check on the accuracy of
measurements when taking ten, twenty or thirty images.
If you plan to observe several stars in a session make sure you carefully note any changes you have to
make in the settings as you move from one star to another. It is all too easy to forget to change settings
when you are tired later in the night.
If you are a visual variable star observer you can, of course, choose to study a favorite star applying the
principles of this manual.
Locating a variable star is a learned skill. To aid the observer, finding charts with well-determined, visual-
magnitude sequences of comparison stars should be used. We urge our observers to use these charts in
order to avoid the conflict that can arise when magnitudes for the same comparison star are derived from
different sets of charts. This could result in two different values of variation being recorded for the same
star on the same night.
The standard AAVSO charts are now generated with the on-line Variable Star Plotter (VSP). These have
completely replaced the old, pre-made paper or electronic charts.
http://www.aavso.org/vsp
An explanation of the VSP online form follows.
CHOOSE A PREDEFINED CHART SCALE This drop down menu allows you to set the field of view
according to the old finder chart scales. On the menu you will see designations ‘A’, ‘B’, ‘C’, etc. for
example, an ‘A’ chart will show you 15 degrees of sky and stars down to 9th magnitude. A ‘B’ chart will
show you 3 degrees of sky and stars down to 11th magnitude. You need to use a chart, or series of charts,
that cover the range of magnitudes of the variable star you are observing. This is also determined by the
instrumentation you are using. A table of scales is given below.
CHOOSE A CHART ORIENTATION
This option will help you to create a chart which, when viewed upright, will show the stars in the same
orientation as that seen in your observing equipment. For example, if your telescope gives you an “upside
down” image (as with a refractor or reflector using no diagonal), you will want to use the “Visual” option
which will give you a chart having south at the top and west toward the left. If you use a diagonal, you
may wish to select the “Reversed” option which creates a chart with north up and west to the left. The
“CCD” option creates a chart with north at the top and east to the left that can also be useful for binocular
and naked eye observing.
PLOT ON COORDINATES
Instead of typing in a star’s name, you may enter the RA and DEC of the center of the chart you create.
When entering coordinates, you must separate the hours, minutes, and seconds of RA with either spaces
or colons. The same applies to separating degrees, minutes, and seconds in Dec.
WHAT WILL THE TITLE OF THE CHART BE? The title is a word or phrase you’d like to see
displayed at the top of the chart. You do not need to enter anything into the title field, however a short
title can be very useful. Include the star name and chart type such as, “R Leonis B Chart.” The big letters
are easier to see in the dark and knowing the chart scale may be useful. If you leave this field blank, the
star’s name will appear in the title field on the chart.
The Comment field can also be left blank, but if you create a chart for a specific purpose that can’t be
explained in the title field, this is the place to do it. Comments will be placed at the bottom of the chart.
FIELD OF VIEW
This is the chart’s field of view expressed in arc minutes. Acceptable values range from 1 to 1200 arc
minutes. When you use a predefined scale from the drop-down list mentioned earlier, the FOV will be
filled in for you automatically.
MAGNITUDE LIMIT
This is the limiting magnitude for the field. Stars fainter than this will not be plotted. Be careful not to set
the limit too faint. If the field for the star you wish to plot is in the Milky Way, you could wind up with a
chart that is completely black with stars!
RESOLUTION
This refers to the size of the chart as seen on your computer screen. A resolution of 75 dpi is the default
value for most web pages. Higher resolution will give you better quality, but larger images that may not
fit on a single printed page. When in doubt, it is probably best to use the default value.
HOW WOULD YOU LIKE THE OUTPUT? Select “Printable” to get a chart suitable for printing.
WOULD YOU LIKE A BINOCULAR CHART? Selecting this option produces charts that only label
specially selected comparison stars useful for observing stars in the AAVSO Binocular Program.
Generally, this means only a handful of comparison stars brighter than 9th magnitude will be shown near
these bright binocular variable stars. You will know when you are in this mode because binocular charts
are plainly marked in the upper right hand corner. Remember to deselect this button when you wish to
make telescopic charts again.
An ephemeris (plural: ephemerides; from the Greek word ἐφημερίς ephēmeris "diary", "journal") is a
table of values that gives the times and dates of mid-primary eclipse of an eclipsing binary or, for
pulsating stars (Cepheids and Miras) the date and time of maximum. The ephemeris of the stars on our
recommended list can be found in the AAVSO International Variable Star Index (VSX)
http://www.aavso.org/vsx/
Enter the name of the star you want to observe, for example W Sgr, and click “search”. The result page
will have a link called “Ephemeris” on the 12th line. Click on that and the next several maxima of this
Cepheid variable will be displayed. Note: the ephemeris will tell you the next several maxima for
pulsating stars like Cepheids and minima for eclipsing binaries.
If your target star and comparisons approach the zenith during the night then they will appear brighter
than when they were closer to the horizon (although the difference remains the same). You may need to
change the settings to maintain a good signal to noise ratio.
If fog is forecast then your camera and lens may mist up. This may occur before the actual fog. You have
to plan your measurements to take place before the fog. There can be dew on your camera without fog.
You have to be alert to this happening as it will ruin measurements. You can minimise the problem by
transferring your set up to a cool but drier environment (such as a garage) in between measurements. You
can cover the camera, in between measurements, with a plastic bag, which preserves a drier environment.
If the ambient temperature is warmer or colder than your internal house temperature then it is advisable to
put the camera out for twenty minutes so that the camera and lens can thermally adjust.
If you live near the ocean be aware that condensation can be salty, which is very bad for camera and
lenses.
Although a bright moon can have a significant impact on visual observations of variable stars the effect
on DSLR measurements is negligible. A bright moon has no effect on measurements in the 50% of the
sky opposite to the moon. Provided the camera lens has a hood, measurements are possible quite near to a
bright moon provided no light from the moon falls directly on the camera lens. A rule of thumb might be
that the only measurements that will be precluded are those of stars in the constellation in which the moon
happens to be situated.
lasting from a month or more to several hundred days.

Recurrent Novae (NR): Recurrent novae, which differ from typical novae by the fact that two or more outbursts (instead of a single one) separated by 10-80 years have been observed. Examples: T CrB, T Pyx. Suggested observing cadence: 1 day.

RV Tauri (RVTAU): Variables of the RV Tauri type. These are radially pulsating supergiants. The light curves are characterized by the presence of double waves with alternating primary and secondary minima that can vary in depth so that primary minima may become secondary and vice versa. The complete light amplitude may reach 3-4 mag. in V. Periods between two adjacent primary minima (usually called formal periods) lie in the range 30-150 days. Suggested observing cadence: 2-5 days.

S Doradus (SDOR): Variables of the S Doradus type. These are eruptive, high-luminosity stars showing irregular light changes with amplitudes in the range 1-7 mag in V. They belong to the brightest blue stars of their parent galaxies. As a rule, these stars are connected with diffuse nebulae and surrounded by expanding envelopes. Examples: P Cyg, η Car. Suggested observing cadence: 5-10 days.

Supernovae (SNe): Supernovae. Stars that increase, as a result of a final explosion, their brightnesses by 20 mag and more, then fade slowly. According to the light curve shape and the spectral features, supernovae are subdivided into types I and II. Suggested observing cadence: 1 day.

Semi-Regular (SR, SRA, SRB, SRC): Semiregular variables, which are giants or supergiants of intermediate and late spectral types showing noticeable periodicity in their light changes, accompanied or sometimes interrupted by various irregularities. Periods lie in the range from 20 to >2000 days, while the shapes of the light curves are rather different and variable, and the amplitudes may be from several hundredths to several magnitudes (usually 1-2 mag. in V). Suggested observing cadence: 5-10 days.

Dwarf Novae (NL, UG, UGSS, UGSU, UGWZ, UGZ): U Geminorum-type variables, or "dwarf novae". Close binary systems consisting of a dwarf or subgiant star that fills the volume of its inner Roche lobe and a white dwarf surrounded by an accretion disk. Orbital periods are in the range 0.05-0.5 days. From time to time the brightness of a system increases rapidly by several magnitudes (outburst) and, after an interval of from several days to a month or more, returns to the original state. According to the characteristics of the light changes, U Gem variables may be subdivided into three types: SS Cyg-type (UGSS), SU UMa-type (UGSU), and Z Cam-type (UGZ). Suggested observing cadence: 1 day.

Young Stellar Objects (YSOs), active state: Young Stellar Object of unspecified variable type. Pre-main sequence star, likely T Tauri. Suggested observing cadence: 1 day or less.

Young Stellar Objects (YSOs), inactive state: Suggested observing cadence: 2-5 days.

Symbiotics (ZAND): Symbiotic variables of the Z Andromedae type. They are close binaries consisting of a hot star, a star of late type, and an extended envelope excited by the hot star’s radiation. The combined brightness displays irregular variations with amplitudes up to 4 mag. in V. Suggested observing cadence: 1 day.
Constant Star Training
Before you try photometry on variable stars it is a very good scientific exercise to practice photometry on
constant stars. You can pick two close unvarying stars and measure the difference in brightness. You then
compare your estimation of the difference with the predicted difference. You may find that using your
first few clear nights for doing these exercises in DSLR photometry helps a lot when you tackle your first
variable star(s).
This is a very good method of developing your knowledge of, and skill with, your equipment and the
camera settings. You can experiment with the effect of changing the length of exposure, the ISO, and the
f-stop. You can work out which settings produce the most precise measurement. You can experiment with
the number of images required for the best result, from a minimum of ten to a maximum of fifty.
You can experiment with a pair of stars that have a magnitude difference of over 0.5, a difference of
around 0.2, and a difference of around 0.1. You may find, for example, that you can only compare stars
with a difference of 0.1 magnitude when observing conditions are very good.
You can experiment with a pair of stars around magnitude 3 and compare your overall results with those
of two stars around magnitude 7. You can work out the practical range of your lens for doing precise
photometry.
In choosing a pair of stars you should go for a pair of about the same color. This is because the predicted
difference in magnitude is based on the Johnson V magnitudes of the two stars. Unless the stars are of
nearly the same color, the difference you measure may not match the predicted difference even when your
settings and procedures are correct.
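To make the comparison concrete, here is a minimal Python sketch of the calculation. The star fluxes and
catalog V magnitudes are made-up values for illustration; your photometry software supplies the real net
(sky-subtracted) counts.

    import math

    # Hypothetical net ADU counts (sky-subtracted) for two constant stars,
    # averaged over a stack of images, from aperture photometry.
    flux_star1 = 152300.0   # brighter star
    flux_star2 = 118750.0   # fainter star

    # Hypothetical catalog Johnson V magnitudes for the same pair.
    v_star1 = 3.42
    v_star2 = 3.70

    # Measured magnitude difference from the flux ratio:
    #   delta_m = -2.5 * log10(F1 / F2)
    measured_diff = -2.5 * math.log10(flux_star1 / flux_star2)

    # Predicted difference from the catalog V magnitudes.
    predicted_diff = v_star1 - v_star2

    print(f"Measured difference:  {measured_diff:+.3f} mag")
    print(f"Predicted difference: {predicted_diff:+.3f} mag")
    print(f"Error:                {measured_diff - predicted_diff:+.3f} mag")

If the error stays within a few hundredths of a magnitude from night to night, your settings and
procedure are working well for stars of that brightness and color difference.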
Check List
Appendix A: Determining Optimal Exposure
Times and Saturation Limits
Your first sessions with the digital camera on the sky should be "fun". Practice taking pictures of the sky
with the camera mounted on a tripod with a basic lens (e.g. whatever lens came with your camera body)
and take time exposures of the sky. Use manual exposure settings so you can experiment with the time
exposure and the aperture (use the widest aperture for your first images). Experiment to see which stars
you can record that are invisible to the naked eye, in faint as well as bright constellations.
When it comes time to perform photometry on variable stars and make scientific contributions, it is
important to find the proper exposure times to avoid saturation of the images. You can use your camera
mounted on a telescope for faint stars (magnitudes ~7-12) or a tripod and camera lens (focal length
between ~55 mm and ~200 mm) for the brightest stars (brighter than magnitude ~7).
Before you take images to make photometry measurements, you should find the exposures needed to keep
your stars in the range of proper exposure (sufficient signal, but not saturated). Because your optimal
exposure time will depend in part on whether your camera will track the sky, we start by discussing how
to target a star field using three different methods of observing: tripod, guided, and prime focus.
Camera on a tripod
With the camera mounted on a tripod you must first find the target. If you have a telephoto zoom lens
begin your observations with the zoom focal length setting at the shortest focal length. For a "normal"
zoom lens that comes with DSLR kits, set this kit lens to its longest focal length (typically 55 mm). If
you wish to use an intermediate focal length setting, it is suggested that you tape it down with sticky tape
so that you do not accidentally move the zoom barrel during the session. Many of the stars you will target
are too faint for naked-eye visibility yet too bright for large telescopes fitted with cooled CCD detectors,
but a digital camera at its highest ISO setting can see and photograph many stars in a light-polluted sky
that our eyes cannot. You will also need a print-out from a planetarium program of the star fields (or use
the AAVSO finder charts printed for your target with a wide field of view comparable to your camera's).
Practice taking exposures that record stars you can identify on the finder charts.
Guided mount
If you have access to a "go-to" telescope you may mount the camera piggy-back onto the telescope and
align the photo-axis of the camera roughly in line with the telescope axis. Finding the star then becomes
much simpler, and you will benefit from the ability to track the stars by means of the telescope's clock
drive.
Prime-focus mount
If you mount the camera at the prime focus of your telescope it is assumed that you have the telescope
and finder scope aligned so you can use the "go-to" feature to take your camera to the target. Shoot
alignment images (it may take several) to make sure that the target is in the center of your image.
The way to check this is to download the image to your computer, bring it up in your photometry
program, and look at it.
Whenever you successfully find the star field in your camera’s field of view (tripod mount, piggy-back, or
prime focus) you are ready to find the optimum exposure time with the ISO setting at 100 or 200. You
should use these lower ISO settings (100 or 200) for photometry work to allow your camera to measure a
wider range of signals (the "dynamic range") with finer precision. With a focal length larger than about 50
mm, set your aperture f/# to the lowest value (most light). Experiment by taking a series of photographs
of your star field at the widest aperture and a consistent ISO (100 or 200), varying the time exposure from
1 sec to 2 sec, 4 sec, 8 sec, etc., until you believe that the brightest stars are saturated.
Load this series of raw images into your image processing program. [Caution: your image processing
software may use up substantial computer memory if you do this with more than just a few images.]
Extract at least one of the green channels for each of your star field images (see your image processing
software's documentation for instructions); do not compensate for dark current or flat fielding at this
stage. Inspect the pixel values of the brightest comp star in each image. Check the header (FITS header)
that your image processing software provides to make sure the image belongs to the exposure-time
sequence mentioned above at the proper ISO setting (100 or 200), and is not one of your aiming images.
The star field should have several stars in it with a range of brightnesses. Perform photometry of these
stars as outlined in Chapter 5 by putting a measurement aperture over each star and measuring the counts
from each star. Measure the same stars from the next image of the same field. You should note how the
counts in the stars scale with time; you should see all of the stars scale by the same relative amount until
the brightest star saturates. When you find the image on which the brightest target saturates, you will
want to record the exposure time of the prior, unsaturated image as the optimum exposure time for that
field and/or for a star having that same brightness. You can continue this process for other stars of known
brightness until they too saturate, and then record their optimum exposures using the same criteria.
Record-keeping is important, so you should record not only the exposure time but also the ISO setting
and the focal ratio used (as well as the focal length of the lens if this can or will change). Once you make
these measurements, they should remain valid for nearly all future observations barring special
circumstances (e.g. hazy or non-photometric conditions, or brighter skies).
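To illustrate the bookkeeping described above, here is a minimal Python sketch that steps through an
exposure ladder, assuming the extracted green channels have been saved as FITS files and that you know
the approximate pixel position of the brightest comp star. The file names, star coordinates, saturation
level, and the EXPTIME header keyword are assumptions; adapt them to your own images and software.

    import numpy as np
    from astropy.io import fits

    # Hypothetical green-channel FITS files from the exposure ladder
    # (1 s, 2 s, 4 s, 8 s, ...), and the approximate (x, y) of the
    # brightest comparison star.  Adjust all of these to your own data.
    files = ["green_01s.fits", "green_02s.fits", "green_04s.fits", "green_08s.fits"]
    x, y = 1523, 847          # pixel position of the brightest comp star
    box = 10                  # half-width of the inspection box, in pixels
    saturation_adu = 15000    # assumed limit; see Appendix B for your camera

    last_good_exposure = None
    for name in files:
        with fits.open(name) as hdul:
            data = hdul[0].data.astype(float)
            exptime = hdul[0].header.get("EXPTIME", "unknown")
        # Peak ADU value in a small box around the star.
        peak = data[y - box:y + box, x - box:x + box].max()
        print(f"{name}: EXPTIME={exptime}  peak ADU={peak:.0f}")
        if peak < saturation_adu:
            last_good_exposure = exptime
        else:
            print(f"Star saturates here; optimum exposure is {last_good_exposure} s")
            break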
Appendix B: Linearity Check (DFC) and
Characterizing the DSLR
It is important with DSLR and CCD imaging that the stars in the images are not saturated. Think of a
meter that measures the brightness of an object where the needle response represents the brightness. If
the brightness is brighter than the maximum position of the meter needle, the full brightness isn’t
measured. It is important to adjust the exposure time and/or the ISO value so that the images are not
saturated. A common error among beginners is to over-expose their images, which ruins the photometric
values.
1. Find a diffuse wall painted in a flat pastel color in a room that is illuminated by a moderately faint
incandescent lamp or LED lamp. It is best to avoid fluorescent lamps on account of the
flicker. Set the DSLR to a low ISO (100-200).
2. Be sure to set your DSLR to the raw format.
3. Use manual exposure so you can manually adjust both the exposure time and the lens
aperture f/no. Make your initial exposure at 1/20 sec at f/4 to f/8.
4. Look at the histogram of the exposure you just made using your camera's histogram display
mode. If you are in the non-saturated region your histogram should have a rounded
peak. If the histogram plot runs too far to the right then reduce the exposure time or lower the
ISO number (if possible). If the histogram plot runs too far to the left then you should increase
the exposure. It is important that your exposures for stars fall in the linear region where the
images are not overexposed.
5. Be sure to record the exposure time, ISO value, and f/no, as well as the lighting conditions, in a log
book for future reference.
Another method of testing the linearity of the images is to do a similar experiment by examining the pixel
value of a certain region of the image of the diffuse wall for various exposure times.
1. Set your aperture f/no to the same f/no you used in the previous experiment; set the exposure time
to a deliberately small value (about 1/10 of the value you found for the “proper” exposure in the previous
experiment) and proceed to take several images of the diffuse wall. Preferably mount the camera
on a tripod and focus on the wall. For each image double the exposure time and repeat until the
histogram on the camera display indicates that the image is completely saturated.
2. Load each raw image into an astronomical image processing program such as AIP4WIN or
MAXIMDL.
3. For each image, extract the green channel(s) of the Bayer arrays. There are two green channels,
usually positions 2 and 3 in each Bayer array.
4. For each green channel image, record the pixel values (ADUs = Analog-to-Digital Units = the
numbers representing the brightness of each pixel) for a small region (10 x 10 pixels, for
example) near the center of the image, being sure to use the same small array of pixels in every
image. Record the exposure time (also called “integration time”), then calculate and plot the
average pixel value of that small area (in ADUs) as a function of exposure time (a scripted
version of this step is sketched below). Your graph should be linear up to the saturation value
and then level off. You should feel comfortable entering the data into a spreadsheet to make the
plot. If the graph looks erratic (“noisy”), try sampling and averaging a larger area of pixels, but
be sure to sample the same pixels from image to image before making the graph. Other causes of
erratic behavior are variations in the intensity of the lighting between images, not sampling the
same pixels from frame to frame when the illumination is not uniform, or sampling too few
pixels. If your pixel-value versus exposure-time graph is curved, but not erratic, then either the
image downloaded from the camera is not in raw format or the camera response is non-linear.
The advantage of DSLR cameras is that the raw images show a linear response to the integrated
amount of light exposure; this is an inherent property of the CCD and CMOS detectors in
cameras. If the graph is still non-linear when using the raw mode, you can still accomplish
photometry with your camera by keeping the exposures down in the linear region. Be sure to
record the ISO setting and the largest permissible ADU value to avoid saturation. The largest
ADU value is slightly dependent on the ISO value. Also remember that changing the ISO setting
will change the exposure time at which the pixels saturate for a given star. When you photograph
stars for photometry you should examine the green channel for the brightest stars that you plan
to measure and adjust the exposure time to be short enough to avoid saturation but long enough
to make the stars detectable. Remember also that the peaks of the bright stars will saturate
before the faint stars or the background of the image.
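The following Python sketch illustrates the pixel-value versus exposure-time plot described in step 4,
assuming the green channels of the wall images have been saved as FITS files. The file names, exposure
times, and the 10 x 10 pixel sampling box are placeholders; the plot should rise linearly and then flatten
at saturation.

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.io import fits

    # Hypothetical green-channel FITS files of the wall, one per exposure time.
    series = [("wall_0p05s.fits", 0.05), ("wall_0p1s.fits", 0.1),
              ("wall_0p2s.fits", 0.2), ("wall_0p4s.fits", 0.4),
              ("wall_0p8s.fits", 0.8), ("wall_1p6s.fits", 1.6)]

    means = []
    for name, exptime in series:
        data = fits.getdata(name).astype(float)
        ny, nx = data.shape
        # Average ADU over the same small box near the image centre each time.
        patch = data[ny // 2 - 5:ny // 2 + 5, nx // 2 - 5:nx // 2 + 5]
        means.append(patch.mean())
        print(f"{exptime:5.2f} s  mean ADU = {patch.mean():8.1f}")

    times = [t for _, t in series]
    plt.plot(times, means, "o-")
    plt.xlabel("Exposure time (s)")
    plt.ylabel("Mean ADU (10 x 10 pixel box)")
    plt.title("DSLR green-channel linearity check")
    plt.show()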
Appendix C: Calibration Pre-assessment:
Testing Dark Images for Hot Pixels
It is important to evaluate your images for various aspects like saturation, signal level, defocusing, etc. in
order to choose the optimal settings for your camera. While doing these steps, we can also determine the
position and level of potential defects in the image sensor such as hot pixels (or “dark impulses” in digital
image processing terminology). You can use that information to decide if a more advanced dark
calibration process is needed and avoid any potential drawbacks of various techniques.
A simple solution is to process a series of images with and without the dark process; if the differences are
just a few millimagnitudes (mmag), hot pixels are not a problem. A few mmag of difference could easily
be due to the added random noise from the master dark correction process.
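A minimal Python sketch of this comparison, assuming you already have two sets of magnitudes for the
same star reduced from the same images with and without dark correction (the numbers below are
invented for illustration):

    import numpy as np

    # Hypothetical magnitudes of the same star from the same image series,
    # reduced once without and once with dark-frame correction.
    mags_no_dark   = np.array([7.012, 7.018, 7.009, 7.015, 7.011])
    mags_with_dark = np.array([7.010, 7.016, 7.008, 7.014, 7.010])

    diff_mmag = (mags_no_dark - mags_with_dark) * 1000.0
    print(f"Mean difference:    {diff_mmag.mean():+.1f} mmag")
    print(f"Largest difference: {np.abs(diff_mmag).max():.1f} mmag")
    # If these differences are only a few mmag, hot pixels are not a problem
    # for this field and the dark correction can be skipped.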
Another way is to view and measure the hot pixels; tools are available in most photometry software. Load
a dark image, select a small area and zoom in to the point where you can clearly see the pixels as small
squares. Random noise is the grainy dark background; hot pixels are a small number of brighter pixels
(e.g. as seen in Figure 4.2). Most photometry software lets you position the cursor over a pixel and read
off the ADU value.
Figure 5.1 Line profile showing ADU values along an approximately 300-pixel section of a long exposure
image. The fluctuations around ~50 counts (ADU) are due to random noise. The prominent spikes are hot
pixels (image by Roger Pieri).
You can also use graphing tools to view cuts or statistical information about your images. For example, in
Figure 5.1 we show part of a row of pixels from an image in which the hot pixels are obviously much
stronger than the background random noise. If the hot pixels were not much higher than the background
level, we could consider them negligible and not apply the dark process.
If you like statistics, you can generate a histogram of your image like that seen in Figure 5.2. The large
peak with a mean of ~1800 ADU is the distribution of random dark-current noise; a secondary bump at
~2500 ADU shows a slightly different population of pixels that have more dark current. The widths of
these Gaussians will increase as the sensor’s temperature rises. The tail to the right represents the
classical hot pixels, with a much higher dark response than the majority of the pixels in the sensor.
Fig 5.2 Histogram of a raw dark image (600 seconds at ISO 1600, T=20°C). The graph is scaled
logarithmically with the number of pixels on the vertical axis; most pixels lie within the left-most peak
and a secondary peak to its right, whereas the classical hot pixels are represented by the tail that extends
to the right. The distribution of hot pixels is well established (image by Richard Berry).
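If your photometry software does not provide such tools, a short Python sketch like the following can flag
hot pixels in a dark frame by thresholding above the typical dark background. The file name and the
5-sigma threshold are assumptions; adjust them for your own camera.

    import numpy as np
    from astropy.io import fits

    # Hypothetical green-channel dark frame saved as FITS.
    dark = fits.getdata("master_dark_green.fits").astype(float)

    # Robust estimate of the dark background level and its scatter.
    median = np.median(dark)
    sigma = 1.4826 * np.median(np.abs(dark - median))   # MAD-based sigma

    # Flag pixels far above the background as "hot" (5 sigma is an assumption).
    threshold = median + 5.0 * sigma
    hot = dark > threshold
    print(f"Background: {median:.1f} ADU, scatter: {sigma:.1f} ADU")
    print(f"Hot pixels above {threshold:.1f} ADU: {hot.sum()} "
          f"({100.0 * hot.sum() / dark.size:.3f}% of the sensor)")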
Appendix D: Testing Flats for Uniform
Illumination
No matter which flat field method is employed it is important to check how even the illumination is. A
one percent variation across the image can result in a measurement error of about 0.01 magnitudes
(since 2.5 log10(1.01) ≈ 0.011 mag).
An easy way to check illumination uniformity is to make master flat frames from two sets of images, the
second set recorded after rotating the camera (or light box) by 90 degrees. Divide one master image by
the other and measure pixel intensity (ADU values) across the diagonals of the resulting image. There
will be random fluctuations due to counting statistics but ideally there should be no systematic increase or
decrease in intensity.
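A minimal Python sketch of this test, assuming the two master flats have been saved as FITS files of the
same dimensions (the file names and the smoothing length are placeholders):

    import numpy as np
    from astropy.io import fits

    # Hypothetical master flats taken before and after rotating the camera 90 deg.
    flat_a = fits.getdata("master_flat_0deg.fits").astype(float)
    flat_b = fits.getdata("master_flat_90deg.fits").astype(float)

    ratio = flat_a / flat_b
    ny, nx = ratio.shape
    n = min(ny, nx)

    # Pixel values along the two diagonals of the ratio image.
    diag1 = ratio[np.arange(n), np.arange(n)]
    diag2 = ratio[np.arange(n), n - 1 - np.arange(n)]

    for label, diag in (("diagonal 1", diag1), ("diagonal 2", diag2)):
        # Smooth away random noise so only the systematic trend remains.
        smooth = np.convolve(diag, np.ones(101) / 101.0, mode="valid")
        variation = 100.0 * (smooth.max() - smooth.min()) / smooth.mean()
        print(f"{label}: systematic variation ~{variation:.2f}%")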
Figure 5.3 Line profiles obtained from dividing two master flats showing the result of even (left) and
uneven (right) illumination (plots by Mark Blackford).
An example is shown in Figure 5.3 where the uneven illumination was achieved by removing one of eight
incandescent globes from the light box. The right graph shows 1% systematic variation along diagonal 1
(blue) and 0.2% along diagonal 2 (red). When all eight globes were used (left graph), the systematic variation
along diagonal 1 (blue) was less than 0.1% but along diagonal 2 (red) it increased slightly to 0.3%.
You should aim for less than 0.5% systematic variation in illumination across your master flat frames.