On The Practical Implementation of A Geometric Voting Algorithm For Star Trackers
Abstract
We present our implementation of an algorithm (Kolomenkin et al., 2008) for determining the celestial attitude (orientation) of
a stationary or slowly rotating camera in real time. Our algorithm utilizes several stages of geometric voting. In each stage, we
compare the angular separation for each pair of image stars with pairs of potential matches from a star catalog. If the separations
are similar, a vote is cast for each identification. Then, for each image star, catalog stars with a sufficiently high number of votes
are kept for the next stage. If only one such catalog star meets the criteria, it is determined to be the correct star. An image star with
no suitable candidates left is determined to be a false star. In our application the camera’s rotation may be significant; therefore, we
also present our method of image processing, which takes into account smearing given an axis of rotation and angular velocity.
Keywords: star tracker, image processing, algorithm, atmospheric effects
1. Introduction

Precise navigation of satellites and high-altitude balloons is not a trivial task. Traditionally, high-quality gyroscopes are used for this purpose, but they are expensive, consume a lot of power, are often too large for simple applications, and drift over long periods of time (Gebre-Egziabher and Elkaim, 2008). Accelerometers and magnetometers are often employed in high-altitude balloons and aircraft, sometimes coupled to gyroscopes or GPS using a Kalman filter, but it is difficult to achieve high precision using such systems (Gebre-Egziabher et al., 2002; Velk et al., 2005). One reason is disturbances to the Earth’s magnetic field due to factors like other objects near the system (Velk et al., 2005).

One particular device that can be used instead of, or in conjunction with, these other devices is a star tracker. This device records images of stars in a somewhat wide field of view, identifies those stars based on the pattern they form, and determines its attitude using a star catalog stored in memory. Star trackers have been practical since digital cameras and sufficiently sophisticated computers became available.

Soon after the invention of the CCD camera in 1969, Salomon of JPL developed the first star tracker in 1976 (Spratling and Mortari, 2009). Although some progress had been made in the ten years or so following the conception of the star tracker, much faster and more robust methods were developed in the 1990s, when computer science was developing very rapidly. One group developed an algorithm based on a binary search tree (Quine and Durrant-Whyte, 1996). In their algorithm, the star positions were narrowed down by dividing the catalog into two parts and deducing where the stars were more likely to be found. Another group developed an algorithm which produced a grid marked by stars in the image and compared the grid with the catalog in order to find a matching pattern (Padgett and Delgado, 1997).

Many other algorithms have been developed since. Here, we adapt an algorithm originally developed by Kolomenkin et al., which utilizes a voting scheme for candidates from the catalog for each image star (Kolomenkin et al., 2008).

2. Catalog Generation

The stars used for generating the catalog were obtained from the Yale Bright Star Catalog (BSC), which contains 9,110 stars along with their J2000 coordinates, proper motions, and magnitudes. It is more or less complete to magnitude 7 (Hoffleit and Warren, 1991). Each time the catalog is generated, the date and time are used to compute the Julian date, from which the effects of precession and proper motion are calculated for each star to determine its new location (Meeus, 1991). The catalog contains the J2000 coordinates of the stars, and so 2000 is used as the base year for the calculations.

In our implementation, we also make use of a “pair catalog,” which lists all pairs of these stars which are close enough to be seen in the same image. Each of these pairs is listed with its angular separation, which we will from now on refer to simply as the distance when it is clear what is meant from the context.
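As an illustrative sketch (not the flight code), such a pair catalog can be built by brute force over all star pairs and sorted by separation so it can be binary-searched later; the `(id, unit_vector)` star tuples and the cutoff `max_sep_rad` (set by the camera’s field of view) are our own assumptions:

```python
import math
from itertools import combinations

def build_pair_catalog(stars, max_sep_rad):
    """stars: list of (id, unit_vector) tuples.
    Returns (separation, id1, id2) triples sorted by angular separation."""
    pairs = []
    for (id1, v1), (id2, v2) in combinations(stars, 2):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
        sep = math.acos(dot)           # angular separation ("distance")
        if sep <= max_sep_rad:         # close enough to share one image
            pairs.append((sep, id1, id2))
    pairs.sort()                       # sorted for later binary search
    return pairs
```

Sorting by separation is what makes the range query of Section 5 fast.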
Preprint submitted to New Astronomy February 27, 2013
3. Star Detection

In our implementation, an image is represented as a grid of integer values, one for each pixel. Initially, these values correspond to the photon count from the CCD camera.

Before a particular image is processed, we subtract from it a “dark image” (an image taken with the shutter closed) to reduce artifacts due to non-uniformity in the CCD. In practice, we use an average of 30 dark images for this purpose, in order to reduce the effects of noise.

We begin our algorithm by marking pixels that we shall call outstanding pixels. These pixels stand out in brightness and are likely to result from stars. Initially, we assumed a normal distribution of pixel values. This worked well enough, but suffered from the fact that a typical working image will not obey a normal distribution. Instead, it will be mostly dark with some brighter spots where stars are found. Therefore, we opted for an implementation using robust statistics, based on the median and quartiles. This method turned out to be more reliable, but not quite as efficient.

In our algorithm, the image is partitioned into 64 × 64 pixel blocks. The median, interquartile range (IQR), and quartiles (Q_n) are determined for each block. For a pixel P in a given block to be considered outstanding, its value must be greater than the third quartile plus some constant times the IQR, that is,

P > Q_3 + α × IQR,  (1)

where α in general depends on the hardware and implementation. In our implementation, we set α = 1.

Next, we search through the list of outstanding pixels and use a recursive method to check if there are sufficiently many clustered together to form a star. The algorithm iterates over outstanding pixels P_1, P_2, …, P_N, which all begin “unmarked.” The marking which follows is done in order to keep track of which pixels are determined to be part of a star in the image. Now, an unmarked pixel P_i will be marked if the average value P_{0,i} of the 3 × 3 pixel square centered at P_i is greater than Q_3 plus some constant β < α times the IQR. That is,

P_{0,i} > Q_3 + β × IQR.  (2)

We use the average value around P_i to avoid false stars due to hot pixels and noise. To keep track of how the pixels are clustered, we also add P_i to a new group and query the eight pixels adjacent to it (including diagonally-adjacent pixels) to determine if any of them is outstanding and unmarked. If so, that pixel is added to the newly created group, and its neighbors are recursively polled in the same fashion. In our implementation, we set β = 0.6. Note that our typical images have a signal-to-noise ratio of about 10.

For each grouping of outstanding pixels found, we throw out any which contain fewer than some predetermined number of pixels (5 in our implementation). Otherwise, the grouping is determined to represent a star.
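A minimal sketch of the detection scheme above, with our constants α = 1, β = 0.6, and a 5-pixel minimum; for brevity the quartile statistics are taken over the whole image rather than over 64 × 64 blocks, and the recursion is written as an iterative flood fill:

```python
import statistics

ALPHA, BETA, MIN_PIXELS = 1.0, 0.6, 5

def mean3x3(img, r, c):
    """Average value of the 3x3 square centered at (r, c), clipped at edges."""
    h, w = len(img), len(img[0])
    vals = [img[i][j] for i in range(r - 1, r + 2) for j in range(c - 1, c + 2)
            if 0 <= i < h and 0 <= j < w]
    return sum(vals) / len(vals)

def detect_stars(img):
    """Return groups of clustered outstanding pixels ("stars")."""
    flat = [v for row in img for v in row]
    q = statistics.quantiles(flat, n=4)        # [Q1, Q2, Q3]
    q3, iqr = q[2], q[2] - q[0]
    h, w = len(img), len(img[0])
    outstanding = {(r, c) for r in range(h) for c in range(w)
                   if img[r][c] > q3 + ALPHA * iqr}        # eq. (1)
    marked, groups = set(), []
    for seed in outstanding:
        if seed in marked:
            continue
        group, stack = [], [seed]
        while stack:                                       # flood fill
            r, c = stack.pop()
            if (r, c) in marked or (r, c) not in outstanding:
                continue
            if mean3x3(img, r, c) <= q3 + BETA * iqr:      # eq. (2)
                continue                                   # hot pixel / noise
            marked.add((r, c))
            group.append((r, c))
            stack.extend((r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        if len(group) >= MIN_PIXELS:                       # discard tiny groups
            groups.append(group)
    return groups
```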
4. Star Centroiding and Error Determination

The next step is to find the centroid of each determined star, i.e. its center weighted by brightness. To do this, we take the weighted average of the image coordinates of the pixels in the star and all pixels adjacent to them (including those not determined to be outstanding), where the weight is determined by the pixel value. In this way, if a pixel at the edge of a star has a large value, but not quite large enough to count as outstanding, its greater-than-background value contributes to the centroid. The star is then given image coordinates at the centroid, and these coordinates are converted into Cartesian coordinates on the sphere, taking into account a simple radial distortion model. A more robust distortion model is developed in Appendix A.

At this point the reader may be wondering why we do not perform aperture photometry (that is, subtracting the average background noise taken from an annulus centered about the star), as in other implementations, e.g. Kolomenkin et al. (2008). The reason is that, in our implementation, some stars appear highly distorted and asymmetrical due to lens distortion, especially when they appear near the edges of the image, so aperture photometry does not work well. Thus, the algorithm developed here offers greater versatility.

Later, we will require an estimate of the error in centroiding. To that end, we first determine the approximate pixel radius r_i of each image star based on the number of outstanding pixels contained within it. From this we estimate the angular radius φ_i, the angular distance corresponding to the radius. Now, the error ε_i should depend on φ_i. Due to lens aberration, ε_i may also depend on the distance from the center of the image. In our implementation, we take

ε_i ∝ 1/SNR ∝ 1/√φ_i.  (3)

This encodes the assumption that a brighter star will be centroided more accurately than a dim, smaller-looking star.

For more general applications, there is a formula for the lower bound on the error, assuming the point spread function (PSF) distribution is Gaussian. This is given by σ/√N_ph, where σ is the standard deviation of the PSF and N_ph is the total photon count for the star (Thomas, 2004). In practice, one can use the distance from the centroid of each photon received to approximate σ.
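The centroiding and error estimate above can be sketched as follows; the plate scale (radians per pixel) and the unit proportionality constant in eq. (3) are illustrative assumptions, not values from our hardware:

```python
import math

def centroid_and_error(img, group, plate_scale=1e-4):
    """Brightness-weighted centroid over the group plus all adjacent
    pixels, and an error estimate following eps ∝ 1/sqrt(phi).
    plate_scale (rad/pixel) and the constant of proportionality (1)
    are assumed for illustration."""
    h, w = len(img), len(img[0])
    region = set()
    for r, c in group:                 # group pixels + their 8 neighbours
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if 0 <= r + dr < h and 0 <= c + dc < w:
                    region.add((r + dr, c + dc))
    total = sum(img[r][c] for r, c in region)
    row = sum(r * img[r][c] for r, c in region) / total
    col = sum(c * img[r][c] for r, c in region) / total
    radius_px = math.sqrt(len(group) / math.pi)   # pixel radius from area
    phi = radius_px * plate_scale                 # angular radius
    eps = 1.0 / math.sqrt(phi)                    # eq. (3), constant = 1
    return (row, col), eps
```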
5. Star Identification

Here we adopt a multi-stage variation of the “lost-in-space” algorithm developed by Kolomenkin et al. (Kolomenkin et al., 2008) in order to identify stars in the image with stars in the catalog. We let n denote the number of stars detected in the image. In our implementation, if n is larger than some value n_0 (we used 20), we throw out the dimmest stars and keep only the n_0 brightest stars in order to reduce runtime. This has the added benefit of getting rid of potentially false stars.

In the first stage, we start a list for each image star S_{I,i} of votes for candidate catalog stars. Of course, this list is initially empty. For each pair P_{I,ij} of distinct image stars S_{I,i} and S_{I,j}, we introduce the combined error δ_{ij}, which is the sum of the errors ε_i and ε_j. Then, using the sorted catalog we may quickly find all pairs P_{C,rs} of catalog stars S_{C,r} and S_{C,s} with distance D_{C,rs} in the range

|D_{C,rs} − D_{I,ij}| < δ_{ij}.  (4)
For simplicity, we do this using a binary search in O(log n), but it could also be done in O(1) using the k-vector technique developed in Mortari and Neta (2000).

Next, we add a vote for each catalog star in the pairs P_{C,rs} to the candidate list for each star in the image star pair P_{I,ij}. To clarify, a total of four votes is cast for each pair of catalog stars. Once this is done for each pair of image stars, we examine the list of votes for each image star. We wish to reduce the number of possibilities, so we keep the catalog stars with the N highest vote counts as our list of candidates for the identity of the image star. In our implementation, N = 5. This completes the first stage.
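The first-stage voting described above can be sketched as follows; the data layout (`image_stars` as a dict of unit vectors and centroiding errors, the pair catalog as sorted `(separation, id1, id2)` triples) is our own illustrative choice:

```python
import bisect
import itertools
import math

def vote_first_stage(image_stars, pair_catalog, top_n=5):
    """image_stars: {i: (unit_vec, eps_i)}.
    pair_catalog: (separation, id1, id2) sorted by separation (Section 2).
    Returns the top-N catalog candidates for each image star."""
    votes = {i: {} for i in image_stars}
    seps = [p[0] for p in pair_catalog]
    for (i, (vi, ei)), (j, (vj, ej)) in itertools.combinations(
            image_stars.items(), 2):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(vi, vj))))
        d_img = math.acos(dot)                   # D_{I,ij}
        delta = ei + ej                          # combined error delta_ij
        lo = bisect.bisect_left(seps, d_img - delta)
        hi = bisect.bisect_right(seps, d_img + delta)
        for _, r, s in pair_catalog[lo:hi]:      # |D_C - D_I| < delta, eq. (4)
            for img_star in (i, j):              # four votes per matched pair
                for cat_star in (r, s):
                    votes[img_star][cat_star] = votes[img_star].get(cat_star, 0) + 1
    candidates = {}
    for i, tally in votes.items():
        ranked = sorted(tally, key=tally.get, reverse=True)
        candidates[i] = ranked[:top_n]
    return candidates
```

The binary search over the sorted separations is the O(log n) range query mentioned above.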
In the second stage, once the list of candidates is generated, the original algorithm is repeated, but instead of comparing each image pair to all pairs in the catalog, we only consider catalog pairs formed from the image stars’ respective candidates. The same voting scheme is now used to produce a new list of candidates.

In the third and final stage, we make one slight change. Before repeating the algorithm, if there were initially an abundance of stars in the image (i.e. n > n_0), we keep only the candidates with the highest numbers of votes; in particular, we keep only the candidates with greater than 3/4 of the highest number of votes on the list for that image star. If there were few stars in the image (n ≤ n_0), then only the candidate with the most votes is kept. This distinction is meant mostly to verify the identifications bubbling up to the top in the previous two stages, and it works quite well. After that, the voting scheme is used again to produce a final list of candidates. At this point, if an image star has no candidates, it is determined to be a false star and is removed from the list of image stars. Otherwise, the candidate with the most votes is assumed to be the identity of the image star.

In addition, we implemented a tracking mode where, once the system has acquired an attitude, it uses this prior data to improve the efficiency of further attitude acquisition. This is done by limiting the catalog stars used to those within a circle centered on the previously acquired attitude, of radius ρ = γδ, where δ is the angular distance of the diagonal of the camera and γ ≥ 1 is some multiplicative factor. In our implementation, γ = 1.5. If the system cannot acquire a new attitude in some set period of time or interval of attempts, the prior data is dumped and the algorithm reverts to using the entire catalog.
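The tracking-mode restriction amounts to a simple angular-distance cut; a sketch, with the previous attitude represented as a unit vector toward the old image center (an assumption for illustration):

```python
import math

def restrict_catalog(catalog, prev_attitude, diag_angle, gamma=1.5):
    """Keep only catalog stars within rho = gamma * diag_angle (radians)
    of the previously acquired attitude. catalog: (id, unit_vector) pairs."""
    rho = gamma * diag_angle
    kept = []
    for star_id, v in catalog:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v, prev_attitude))))
        if math.acos(dot) <= rho:      # inside the search circle
            kept.append((star_id, v))
    return kept
```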
6. Coordinate Determination of the Image Center

This section describes how the right ascension (RA) and declination (DEC) of the center of the image are obtained, once the image stars have been identified.

First, we establish a spherical coordinate system aligned at the center of the image and determine the coordinates in this system for each of the image stars. Since we use a rectilinear lens, we assume the coordinates of the stars on the image and on the sphere are related by a gnomonic projection (see Figure 1). In this projection, we think of the planar image as lying tangent to the sphere, where the point of tangency is the center of the image C = (1, 0, 0). We then take the x-axis of the image to be parallel to the y-axis of the space. To obtain the spherical coordinates from the planar coordinates, one would construct a line L through the center of the sphere O = (0, 0, 0) and the point on the image P, and find the intersection Q of this line and the sphere (on the same side of the sphere as the plane). This process is described in Appendix B.

Figure 1: For a rectilinear lens, the projection from the plane to the sphere is used to obtain the spherical coordinates of stars. The resulting coordinates of the stars should differ from the stars’ sky coordinates by a rotation.
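The back-projection described above can be sketched directly; here the focal length f sets the units of the image coordinates, and mapping the image y-axis to the space z-axis is our assumption for illustration:

```python
import math

def image_to_sphere(X, Y, f=1.0):
    """Gnomonic back-projection: the image plane is tangent to the unit
    sphere at C = (1, 0, 0), with the image x-axis parallel to the space
    y-axis (and, by assumption, the image y-axis parallel to z). The line
    through O = (0, 0, 0) and the plane point P = (f, X, Y) meets the
    sphere at Q = P / |P|."""
    norm = math.sqrt(f * f + X * X + Y * Y)
    return (f / norm, X / norm, Y / norm)
```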
Next, a rotation is found to transform the image coordinate system closely to the sky coordinate system, as used by the catalog. More precisely, if r_1, r_2, …, r_n are the coordinates of the image stars and r′_1, r′_2, …, r′_n are the coordinates of their respective catalog stars, a rotation R is found such that

Σ_i r′_i · R(r_i)  (5)

is maximized. This process can be done using a quaternion-based least squares method as in (Horn, 1987). Once we know the rotation matrix R, we apply it to C, which gives us the coordinates of the center of the image in the sky coordinate system. We can then easily compute the RA and declination of the center of the image using basic trigonometry.
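A sketch of the quaternion solution of eq. (5) in the spirit of Horn (1987): the optimal quaternion is the eigenvector, for the largest eigenvalue, of a 4 × 4 symmetric matrix built from the correlation of the two vector sets. A shifted power iteration stands in for a full eigensolver here; that substitution, and the data layout, are our own simplifications:

```python
def best_rotation(img_vecs, cat_vecs):
    """Return the 3x3 rotation matrix R maximizing sum_i cat_i . R(img_i)."""
    # correlation matrix S[j][k] = sum_i img_i[j] * cat_i[k]
    S = [[sum(a[j] * b[k] for a, b in zip(img_vecs, cat_vecs))
          for k in range(3)] for j in range(3)]
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = [[Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
         [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
         [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
         [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]]
    # shift so the largest eigenvalue dominates in magnitude
    shift = max(sum(abs(x) for x in row) for row in N)
    q = [1.0, 0.0, 0.0, 0.0]
    for _ in range(200):               # power iteration on N + shift*I
        q = [sum(N[r][c] * q[c] for c in range(4)) + shift * q[r]
             for r in range(4)]
        n = sum(x * x for x in q) ** 0.5
        q = [x / n for x in q]
    w, x, y, z = q                      # unit quaternion -> rotation matrix
    return [[1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
            [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]]
```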
7. Anti-Aliasing

7.1. Transformation for Calculating Streaks

We assume the lens is a rectilinear projector from a (unit) sphere representing the photographed sky to a plane representing the image. In order to determine the streak properties, we will compute the relationship between corresponding points in the image coordinates and points on the sphere.

Suppose the camera is aligned such that the axis of rotation (here the z-axis in space) is perpendicular to the image’s x-axis, and the camera does not point near the poles of the axis of rotation. Then the streaks will be dominantly pointed in the horizontal direction in the image.

Call the coordinates on the image X and Y and the space coordinates x, y, and z. Call the angle from the center of the image to the axis of rotation α. For a particular point P(X, Y) on
which simplifies to
x sin α + z cos α = 1. (15)
Figure 3: (a) RA local hour angle vs time, with moving average fit. (b) Declination vs time, with moving average fit. (c) The moving average residual of the RA LHA vs time. (d) The moving average residual of the declination vs time.
by the star tracker. If the camera is tilted (roll) from its position assumed in Section 7, the image coordinates can be rotated before the technique described is applied. Future work should focus on building deblurring algorithms with improvements to

of the errors

ε_{1,i} = δ_{c,i} − (c_0 + c_1 δ_i + c_2 λ_i + c_3 δ_i λ_i + c_4 δ_i² + c_5 λ_i²),  (A.4)
Figure C.6: The quantum efficiency curve for the Sony ICX445AL image sensor.
Figure B.5: The construction used to obtain the spherical coordinates of a particular star.

cos λ cos δ = cos φ.  (B.3)

A sign ambiguity results for λ, which can be resolved using the sign of X.
and

M =
⎡ N     Σδ      Σλ      Σδλ      Σδ²      Σλ²    ⎤
⎢ ∗     Σδ²     Σδλ     Σδ²λ     Σδ³      Σδλ²   ⎥
⎢ ∗     ∗       Σλ²     Σδλ²     Σδ²λ     Σλ³    ⎥
⎢ ∗     ∗       ∗       Σδ²λ²    Σδ³λ     Σδλ³   ⎥
⎢ ∗     ∗       ∗       ∗        Σδ⁴      Σδ²λ²  ⎥
⎣ ∗     ∗       ∗       ∗        ∗        Σλ⁴    ⎦,  (A.9)

where the entries marked ∗ are determined by symmetry.

The coordinates of the star on the unit sphere in space could then be determined by:

x = cos δ cos λ,  (B.4)
y = cos δ sin λ,  (B.5)
z = sin δ.  (B.6)
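The fit behind (A.4) and (A.9) is ordinary least squares on the basis (1, δ, λ, δλ, δ², λ²); a self-contained sketch follows, in which the function name and the small Gaussian-elimination solver are our own illustration rather than the paper’s code:

```python
def fit_distortion(deltas, lambdas, observed):
    """Least-squares coefficients c0..c5 minimizing the sum of squared
    residuals eps_i = observed_i - (c0 + c1*d + c2*l + c3*d*l + c4*d^2
    + c5*l^2), via the normal equations M c = b (cf. A.9)."""
    basis = lambda d, l: [1.0, d, l, d * l, d * d, l * l]
    rows = [basis(d, l) for d, l in zip(deltas, lambdas)]
    M = [[sum(r[j] * r[k] for r in rows) for k in range(6)] for j in range(6)]
    b = [sum(r[j] * y for r, y in zip(rows, observed)) for j in range(6)]
    # Gaussian elimination with partial pivoting
    for col in range(6):
        piv = max(range(col, 6), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 6):
            f = M[r][col] / M[col][col]
            for k in range(col, 6):
                M[r][k] -= f * M[col][k]
            b[r] -= f * b[col]
    c = [0.0] * 6
    for r in range(5, -1, -1):          # back substitution
        c[r] = (b[r] - sum(M[r][k] * c[k] for k in range(r + 1, 6))) / M[r][r]
    return c
```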