
On the Practical Implementation of a Geometric Voting Algorithm for Star Trackers

Gil Tabak (a,b), Alexander Wickes (a,c), Philip Lubin (a,d)

(a) Dept. of Physics, University of California, Santa Barbara, CA 93106
(b) email: [email protected]
(c) email: [email protected]
(d) email: [email protected]

Abstract
We present our implementation of an algorithm (Kolomenkin et al., 2008) for determining the celestial attitude (orientation) of
a stationary or slowly rotating camera in real time. Our algorithm utilizes several stages of geometric voting. In each stage, we
compare the angular separation for each pair of image stars with pairs of potential matches from a star catalog. If the separations
are similar, a vote is cast for each identification. Then, for each image star, catalog stars with a sufficiently high number of votes
are kept for the next stage. If only one such catalog star meets the criteria, it is determined to be the correct star. An image star with
no suitable candidates left is determined to be a false star. In our application the camera's rotation may be significant; we therefore also present our method of image processing, which takes into account smearing given an axis of rotation and angular velocity.
Keywords: star tracker, image processing, algorithm, atmospheric effects

1. Introduction

Precise navigation of satellites and high-altitude balloons is not a trivial task. Traditionally, high-quality gyroscopes are used for this purpose, but they are expensive, consume a lot of power, are often too large for simple applications, and drift over long periods of time (Gebre-Egziabher and Elkaim, 2008). Accelerometers and magnetometers are often employed in high-altitude balloons and aircraft, sometimes coupled to gyroscopes or GPS using a Kalman filter, but it is difficult to achieve high precision using such systems (Gebre-Egziabher et al., 2002; Včelák et al., 2005). One reason is disturbances to the Earth's magnetic field due to factors like other objects near the system (Včelák et al., 2005).

One particular device that can be used instead of, or in conjunction with, these other devices is a star tracker. This device records images of stars in a somewhat wide field of view, identifies those stars based on the pattern they form, and determines its attitude using a star catalog stored in memory. Star trackers have been practical since digital cameras and sufficiently sophisticated computers became available.

Soon after the invention of the CCD camera in 1969, Salomon of JPL developed the first star tracker in 1976 (Spratling and Mortari, 2009). Although some progress had been made in the ten years or so following the conception of the star tracker, much faster and more robust methods were developed in the 1990s, when computer science was advancing very rapidly. One group developed an algorithm based on a binary search tree (Quine and Durrant-Whyte, 1996); in their algorithm, the star positions were narrowed down by dividing the catalog into two parts and deducing in which part the stars were more likely to be found. Another group developed an algorithm which produced a grid marked by stars in the image and compared the grid with the catalog in order to find a matching pattern (Padgett and Delgado, 1997).

Many other algorithms have been developed since. Here, we adapt an algorithm originally developed by Kolomenkin et al., which utilizes a voting scheme for candidates from the catalog for each image star (Kolomenkin et al., 2008).
2. Catalog Generation

The stars used for generating the catalog were obtained from the Yale Bright Star Catalog (BSC), which contains 9,110 stars along with J2000 coordinates, proper motions, and magnitudes. It is more or less complete to magnitude 7 (Hoffleit and Warren, 1991). Each time the catalog is generated, the date and time are used to compute the Julian date, from which the effects of precession and proper motion are calculated for each star to determine its new location (Meeus, 1991). The catalog contains the J2000 coordinates of the stars, and so 2000 is used as a base year for the calculations.

In our implementation, we also make use of a "pair catalog," which lists all pairs of these stars which are close enough to be seen in the same image. Each of these pairs is listed with its angular separation, which we will from now on refer to simply as distance when it is clear what is meant from the context.
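As an illustration only, a minimal Python sketch of such a pair-catalog construction is given below. This is not the authors' code: the array layout, function name, and the use of the field-of-view diagonal as the separation cutoff are our assumptions, and the O(N²) pure-Python loop is for clarity rather than speed.

```python
import numpy as np

def build_pair_catalog(unit_vectors, max_sep_rad):
    """Hypothetical sketch: list every pair of catalog stars whose angular
    separation is small enough to fit in one image, sorted by separation.
    `unit_vectors` is an (N, 3) float array of catalog star directions."""
    pairs = []
    n = len(unit_vectors)
    for r in range(n):
        # Dot products of star r against all later stars give cos(separation).
        cos_sep = unit_vectors[r + 1:] @ unit_vectors[r]
        for k, c in enumerate(cos_sep, start=r + 1):
            sep = float(np.arccos(np.clip(c, -1.0, 1.0)))
            if sep < max_sep_rad:        # close enough to share an image
                pairs.append((sep, r, k))
    pairs.sort()                         # sorted for the binary search later
    return pairs
```

Keeping the list sorted by separation is what allows the binary search used during identification (Section 5).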
3. Star Detection

In our implementation, an image is represented as a grid of integer values for each pixel. Initially, these values correspond to photon count from the CCD camera.
Before a particular image is processed, we subtract from it a "dark image" (an image taken with the shutter closed) to reduce artifacts due to non-uniformity in the CCD. In practice, we use an average of 30 dark images for this purpose, in order to reduce the effects of noise.

We begin our algorithm by marking pixels that we shall call outstanding pixels. These pixels stand out in brightness and are likely to result from stars. Initially, we assumed a normal distribution of pixel values. This worked well enough but suffered from the fact that a typical working image will not obey a normal distribution. Instead, such images are mostly dark with some brighter spots where stars are found. Therefore, we opted for an implementation based on robust statistics, using the median and quartiles. This method turned out to be more reliable but not quite as efficient.

In our algorithm, the image is partitioned into 64 × 64 pixel blocks. The median, interquartile range (IQR), and quartiles (Q_n) are determined for each block. For a pixel P in a given block to be considered outstanding, its value must be greater than the third quartile plus some constant times the IQR, that is,

    P > Q3 + α × IQR, (1)

where α in general depends on the hardware and implementation. In our implementation, we set α = 1.

Next, we search through the list of outstanding pixels and use a recursive method to check if there are sufficiently many clustered together to form a star. The algorithm iterates over outstanding pixels P1, P2, ..., PN, which all begin "unmarked." The marking which follows is done in order to keep track of which pixels are determined to be part of a star in the image. Now, an unmarked pixel Pi will be marked if the average value P̄0,i of the 3 × 3 pixel square centered at Pi is greater than Q3 plus some constant β < α times the IQR. That is,

    P̄0,i > Q3 + β × IQR. (2)

We use the average value around Pi to avoid false stars due to hot pixels and noise. To keep track of how the pixels are clustered, we also add Pi to a new group and query the eight pixels adjacent to it (including diagonally-adjacent pixels) to determine if any of them is outstanding and unmarked. If so, that pixel is added to the newly created group, and its neighbors are recursively polled in the same fashion. In our implementation, we set β = 0.6. Note that our typical images have a signal-to-noise ratio of about 10.

For each grouping of outstanding pixels found, we throw out any which contain fewer than some predetermined number of pixels (5 in our implementation). Otherwise, the grouping is determined to represent a star.
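For concreteness, the following Python sketch combines the thresholds of Eqs. (1)-(2) with the grouping step, written with an explicit stack instead of recursion. It is our illustration, not the authors' code; the per-block bookkeeping and parameter handling are simplified assumptions.

```python
import numpy as np

def detect_stars(image, alpha=1.0, beta=0.6, block=64, min_pixels=5):
    """Sketch of outstanding-pixel detection and clustering (Section 3)."""
    h, w = image.shape
    q3 = np.zeros((h, w))
    iqr = np.zeros((h, w))
    for by in range(0, h, block):            # robust statistics per block
        for bx in range(0, w, block):
            tile = image[by:by + block, bx:bx + block]
            q1, q3_val = np.percentile(tile, [25, 75])
            q3[by:by + block, bx:bx + block] = q3_val
            iqr[by:by + block, bx:bx + block] = q3_val - q1

    outstanding = image > q3 + alpha * iqr   # Eq. (1)
    # 3x3 mean around each pixel, for the hot-pixel guard of Eq. (2)
    padded = np.pad(image.astype(float), 1, mode="edge")
    mean3 = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    markable = mean3 > q3 + beta * iqr       # Eq. (2)

    groups, marked = [], np.zeros((h, w), dtype=bool)
    for y, x in zip(*np.nonzero(outstanding)):
        if marked[y, x] or not markable[y, x]:
            continue
        stack, group = [(y, x)], []
        marked[y, x] = True
        while stack:                         # flood fill over the 8 neighbors
            cy, cx = stack.pop()
            group.append((cy, cx))
            for ny in range(cy - 1, cy + 2):
                for nx in range(cx - 1, cx + 2):
                    if (0 <= ny < h and 0 <= nx < w and outstanding[ny, nx]
                            and markable[ny, nx] and not marked[ny, nx]):
                        marked[ny, nx] = True
                        stack.append((ny, nx))
        if len(group) >= min_pixels:         # discard groupings that are too small
            groups.append(group)
    return groups
```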
4. Star Centroiding and Error Determination

The next step is to find the centroid of each determined star, i.e. its center weighted by brightness. To do this, we take the weighted average of the image coordinates of the pixels in the star and all pixels adjacent to them (including those not determined to be outstanding), where the weight is determined by the pixel value. In this way, if a pixel at the edge of a star has a large value, but not quite large enough to count as outstanding, its greater-than-background value contributes to the centroid. The star is then given image coordinates at the centroid, and these coordinates are converted into Cartesian coordinates on the sphere, taking into account a simple radial distortion model. A more robust distortion model is developed in Appendix A.

At this point the reader may be wondering why we do not perform aperture photometry (that is, subtracting the average background noise taken from an annulus centered about the star), as in other implementations, e.g. Kolomenkin et al. (2008). The reason is that, in our implementation, some stars appear highly distorted and asymmetrical due to lens distortion, especially when they appear near the edges of the image, so aperture photometry does not work well. Thus, the algorithm developed here offers greater versatility.

Later, we will require an estimate of the error in centroiding. To that end, we first determine the approximate pixel radius r_i of each image star based on the number of outstanding pixels contained within it. From this we estimate the angular radius φ_i, the angular distance corresponding to the radius. Now, the error ε_i should depend on φ_i. Due to lens aberration, ε_i may also depend on the distance from the center of the image. In our implementation, we take

    ε_i ∝ 1/SNR ∝ 1/√φ_i. (3)

This encodes the assumption that a brighter star will be centroided more accurately than a dim, smaller-looking star.

For more general applications, there is a formula for the lower bound on the error, assuming the point spread function (PSF) distribution is Gaussian. This is given by σ/√N_ph, where σ is the standard deviation of the PSF and N_ph is the total photon count for the star (Thomas, 2004). In practice, one can use the distance from the centroid of each photon received to approximate σ.
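A compact sketch of the centroiding and the error assignment of Eq. (3) might look as follows. This is our illustration: the plate-scale argument, the circular-radius estimate, and the unit proportionality constant in the error are assumptions.

```python
import numpy as np

def centroid_and_error(image, group, plate_scale_rad):
    """Brightness-weighted centroid (Section 4) and the Eq. (3) error,
    up to a constant. `group` is one list of (y, x) outstanding pixels."""
    # Include the pixels adjacent to the group, as described in the text.
    pixels = set()
    for y, x in group:
        for ny in range(y - 1, y + 2):
            for nx in range(x - 1, x + 2):
                if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                    pixels.add((ny, nx))
    ys, xs = np.array(sorted(pixels)).T
    weights = image[ys, xs].astype(float)
    cy = np.sum(weights * ys) / np.sum(weights)  # brightness-weighted mean
    cx = np.sum(weights * xs) / np.sum(weights)

    radius_pix = np.sqrt(len(group) / np.pi)     # r_i from the pixel count
    phi = radius_pix * plate_scale_rad           # angular radius phi_i
    error = 1.0 / np.sqrt(phi)                   # Eq. (3), up to a constant
    return (cy, cx), error
```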
5. Star Identification

Here we adopt a multi-stage variation of the "lost-in-space" algorithm developed by Kolomenkin et al. (Kolomenkin et al., 2008) in order to identify stars in the image with stars in the catalog. We let n denote the number of stars detected in the image. In our implementation, if n is larger than some value n0 (we used 20), we throw out the dimmest stars and keep only the n0 brightest stars in order to reduce runtime. This has the added benefit of getting rid of potentially false stars.

In the first stage, we start a list for each image star S_I,i of votes for candidate catalog stars. Of course, this list is initially empty. For each pair P_I,ij of distinct image stars S_I,i and S_I,j, we introduce the combined error δ_ij, which is the sum of the errors ε_i and ε_j. Then, using the sorted catalog we may quickly find all pairs P_C,rs of catalog stars S_C,r and S_C,s with distance D_C,rs in the range

    |D_C,rs − D_I,ij| < δ_ij. (4)

For simplicity, we do this using a binary search in O(log n), but it could also be done in O(1) using the k-vector technique developed in Mortari and Neta (2000).

Next, we add a vote for each catalog star in the pairs P_C,rs to the candidate list for each star in the image star pair P_I,ij. To clarify, a total of four votes is cast for each pair of catalog stars. Once this is done for each pair of image stars, we examine the list of votes for each image star. We wish to reduce the number of possibilities, so we keep the N catalog stars with the most votes as our list of candidates for the identity of the image star. In our implementation, N = 5. This completes the first stage.

In the second stage, once the list of candidates is generated, the original algorithm is repeated, but instead of comparing each image pair to all pairs in the catalog, we only consider catalog pairs formed from the image stars' respective candidates. The same voting scheme is now used to produce a new list of candidates.

In the third and final stage, we make one slight change. Before repeating the algorithm, if there were initially an abundance of stars in the image (i.e. n > n0), we keep only the candidates with the highest number of votes; in particular, we keep only the candidates with greater than 3/4 of the highest number of votes on the list for that image star. If there were few stars in the image (n ≤ n0), then only the candidate with the most votes is kept. This distinction is meant mostly to verify the identifications bubbling up to the top in the previous two stages, and it works quite well. After that, the voting scheme is used again to produce a final list of candidates. At this point, if an image star has no candidates it is determined to be a false star and is removed from the list of image stars. Otherwise, the candidate with the most votes is assumed to be the identity of the image star.

In addition, we implemented a tracking mode where, once the system has acquired an attitude, it uses this prior data to improve the efficiency of further attitude acquisition. This is done by limiting the catalog stars used to those within a circle centered on the previously acquired attitude, of radius ρ = γδ, where δ is the angular distance of the diagonal of the camera's field of view and γ ≥ 1 is some multiplicative factor. In our implementation, γ = 1.5. If the system cannot acquire a new attitude in some set period of time or number of attempts, the prior data is dumped and the algorithm reverts to using the entire catalog.
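One voting stage could be sketched as follows. This is our illustration, not the authors' code: the tuple layouts are assumed, and the same function covers the restricted later stages via the hypothetical `candidates` argument.

```python
import bisect
from collections import Counter

def vote_once(image_pairs, pair_catalog, candidates=None, top_n=5):
    """One voting stage (Section 5). `image_pairs` holds (i, j, dist, delta)
    for image stars i, j with measured separation `dist` and combined error
    `delta`; `pair_catalog` is the sorted (separation, r, s) list from
    Section 2. When `candidates` is given (second and third stages), votes
    are restricted to each star's surviving candidates."""
    seps = [p[0] for p in pair_catalog]
    votes = {}                                       # image star -> Counter
    for i, j, dist, delta in image_pairs:
        lo = bisect.bisect_left(seps, dist - delta)  # binary search on Eq. (4)
        hi = bisect.bisect_right(seps, dist + delta)
        for _, r, s in pair_catalog[lo:hi]:
            for img_star in (i, j):
                for cat_star in (r, s):              # four votes per catalog pair
                    if (candidates is not None
                            and cat_star not in candidates.get(img_star, ())):
                        continue
                    votes.setdefault(img_star, Counter())[cat_star] += 1
    # Keep only the top-N vote getters as each image star's candidate list.
    return {k: [c for c, _ in v.most_common(top_n)] for k, v in votes.items()}
```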
6. Coordinate Determination of the Image Center

This section describes how the right ascension (RA) and declination (DEC) of the center of the image are obtained, once the image stars have been identified.

First, we establish a spherical coordinate system aligned at the center of the image and determine the coordinates in this system for each of the image stars. Since we use a rectilinear lens, we assume the coordinates of the stars on the image and on the sphere are related by a gnomonic projection (see Figure 1). In this projection, we think of the planar image as lying tangent to the sphere, where the point of tangency is the center of the image C = (1, 0, 0). We then take the x-axis of the image to be parallel to the y-axis of the space. To obtain the spherical coordinates from the planar coordinates, one would construct a line L through the center of the sphere O = (0, 0, 0) and the point on the image P, and find the intersection Q of this line and the sphere (on the same side of the sphere as the plane). This process is described in Appendix B.

Figure 1: For a rectilinear lens, the projection from the plane to the sphere is used to obtain the spherical coordinates of stars. The resulting coordinates of the stars should differ from the stars' sky coordinates by a rotation.

Next, a rotation is found to transform the image coordinate system closely to the sky coordinate system, as used by the catalog. More precisely, if r_1, r_2, ..., r_n are the coordinates of the image stars and r'_1, r'_2, ..., r'_n are the coordinates of their respective catalog stars, a rotation R is found such that

    Σ_i r'_i · R(r_i) (5)

is maximized. This process can be done using a quaternion-based least squares method as in Horn (1987). Once we know the rotation matrix R, we apply it to C, which gives us the coordinates of the center of the image in the sky coordinate system. We can then easily compute the RA and declination of the center of the image using basic trigonometry.
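As an illustration, the rotation of Eq. (5) can be computed as below. The paper uses Horn's quaternion-based method; this sketch instead solves the same maximization (a Wahba problem) with an SVD, which yields the identical optimal rotation.

```python
import numpy as np

def best_fit_rotation(image_vecs, catalog_vecs):
    """Rotation R maximizing sum_i r'_i . R(r_i), Eq. (5).
    Inputs are (n, 3) arrays of matched unit vectors."""
    B = catalog_vecs.T @ image_vecs      # 3x3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    # The det() correction keeps R a proper rotation (no reflection).
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Usage: the image center C = (1, 0, 0) maps to its sky direction via R @ C;
# RA and declination then follow from basic trigonometry:
# c = best_fit_rotation(r_img, r_cat) @ np.array([1.0, 0.0, 0.0])
# ra, dec = np.arctan2(c[1], c[0]), np.arcsin(c[2])
```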
7. Anti-Aliasing

7.1. Transformation for Calculating Streaks

We assume the lens is a rectilinear projector from a (unit) sphere representing the photographed sky to a plane representing the image. In order to determine the streak properties, we will compute the relationship between corresponding points in the image coordinates and points on the sphere.

Suppose the camera is aligned such that the axis of rotation (here the z-axis in space) is perpendicular to the image's x-axis, and the camera does not point near the poles of the axis of rotation. Then the streaks will be dominantly pointed in the horizontal direction in the image.

Call the coordinates on the image axes X and Y, and the space coordinates x, y, and z. Call the angle from the center of the image to the axis of rotation α. For a particular point P(X, Y) on the image plane, one may calculate the coordinates of a corresponding point on the sphere Q(θ, φ), and vice versa (see Figure 2).

Figure 2: The construction used to transform from the image plane to the unit sphere and vice versa.

The vector from O to C is

    (sin α, 0, cos α) (6)

and the vector from C to P is

    (−(Y/F) cos α, −X/F, (Y/F) sin α). (7)

Therefore, the vector V from O to P is

    V = (sin α − (Y/F) cos α, −X/F, cos α + (Y/F) sin α). (8)

We see

    cos φ = V · (0, 0, 1)/|V|, (9)

so that

    φ = cos⁻¹[(F cos α + Y sin α)/√(X² + Y² + F²)]. (10)

To find θ, we project V onto the xy-plane to obtain the vector

    U = (sin α − (Y/F) cos α, −X/F, 0). (11)

Now,

    cos θ = (U/|U|) · (1, 0, 0) (12)
          = (F sin α − Y cos α)/√(X² + (F sin α − Y cos α)²). (13)

We note there is a sign ambiguity in θ, which can be resolved by giving θ the same sign as −X.

To find the transformation from Q to P instead, we first find the intersection of the line from O to Q and the image plane. The equation of the plane is determined by

    (x − sin α) sin α + (z − cos α) cos α = 0, (14)

which simplifies to

    x sin α + z cos α = 1. (15)

The line from O to Q can be described parametrically as

    V(t) = (x(t), y(t), z(t)) (16)
         = (t sin φ cos θ, t sin φ sin θ, t cos φ). (17)

To find the spatial coordinates of P, we find t so that P lies on the image plane. We obtain

    x = sin φ cos θ / (sin φ cos θ sin α + cos φ cos α), (18)
    y = sin φ sin θ / (sin φ cos θ sin α + cos φ cos α), (19)
    z = cos φ / (sin φ cos θ sin α + cos φ cos α). (20)

Next, we would like to find the corresponding image coordinates of Q. We can obtain X immediately:

    X = −Fy. (21)

We may obtain Y using the Pythagorean theorem,

    Y² = F²[(x − sin α)² + (z − cos α)²]. (22)

The sign of Y will agree with that of z − cos α.

7.2. Streak Properties

We have obtained the transformations both from the image to the sphere and vice versa. One way to determine the needed properties of the streaks is as follows: for each image pixel, find the corresponding coordinates (θ, φ). Assuming the angular velocity is constant and known, one may vary θ by ∆θ, a small change anticipated as a function of time, to obtain (θ + ∆θ, φ). Transforming back to the image coordinates, one obtains the modified coordinates (X_p, Y_p), from which the slope of the streak and its length can be calculated. If a more exact prediction of the streak is needed, such as when the streak is longer or more curved, several such linearizations may be used along a particular streak.

Furthermore, since the streaks will not be exactly aligned with the rows of pixels in the image, it is necessary to use an anti-aliasing algorithm to weight the pixels used along the streak. A quick way to do this is to use Xiaolin Wu's line algorithm, which is used to draw anti-aliased lines (Wu, 1991). The algorithm is generally used to determine the value of each particular pixel to be drawn, but here this value would represent the weight associated with a particular pixel along a streak (see Figure 4).

Figure 4: A demonstration of an anti-aliased line-drawing algorithm.
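The two transformations of Section 7.1 can be collected into a short sketch (our illustration; the scalar form and variable names are assumptions):

```python
import math

def image_to_sphere(X, Y, F, alpha):
    """Image coordinates to sphere angles (theta, phi), per Eqs. (10), (13).
    F is the focal length in pixels; alpha is the angle from the image
    center to the rotation axis."""
    phi = math.acos((F * math.cos(alpha) + Y * math.sin(alpha))
                    / math.sqrt(X * X + Y * Y + F * F))       # Eq. (10)
    u = F * math.sin(alpha) - Y * math.cos(alpha)
    theta = math.acos(u / math.hypot(X, u))                   # Eq. (13)
    return (-theta if X > 0 else theta), phi                  # sign follows -X

def sphere_to_image(theta, phi, F, alpha):
    """Sphere angles back to image coordinates, per Eqs. (18)-(22)."""
    denom = (math.sin(phi) * math.cos(theta) * math.sin(alpha)
             + math.cos(phi) * math.cos(alpha))
    x = math.sin(phi) * math.cos(theta) / denom               # Eq. (18)
    y = math.sin(phi) * math.sin(theta) / denom               # Eq. (19)
    z = math.cos(phi) / denom                                 # Eq. (20)
    X = -F * y                                                # Eq. (21)
    Y = F * math.sqrt((x - math.sin(alpha)) ** 2
                      + (z - math.cos(alpha)) ** 2)           # Eq. (22)
    return X, math.copysign(Y, z - math.cos(alpha))

# One linearized streak endpoint for a pixel, given angular rate omega and
# exposure t_exp (names assumed):
# theta, phi = image_to_sphere(X, Y, F, alpha)
# X2, Y2 = sphere_to_image(theta + omega * t_exp, phi, F, alpha)
```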
7.3. De-Blurring

We have discussed a practical method to compute how the streaks should appear in the image, to first order. Now we discuss one possible way to use this information to increase the signal-to-noise ratio in the image, which will improve the performance of the algorithm.

When a star is streaked, the signal-to-noise ratio will be reduced by the length of a streak (say n pixels). We assume the readout noise in the image is σ. A simple algorithm may be used to alleviate the reduction of the SNR by up to √n: simply add pixel values along paths where streaks are expected. This is the same procedure that would streak an unstreaked image.

Suppose the average (streaked) image pixel has value µ_s. If n pixels are added along the streak of a star, the newly formed pixel value will be nµ_s. However, the new error value will be √(σ² + ... + σ²) = σ√n, since variance is additive for pixels whose readout is independent. Thus, the signal-to-noise ratio has improved by

    (nµ_s)/(σ√n) ÷ (µ_s/σ) = √n. (23)

8. Results

In our implementation, we utilized all parts of the algorithm presented except anti-aliasing. Furthermore, we opted to avoid the use of extrapolation or filtering algorithms, in order to test the reliability of the algorithm. We used a Mightex CCD chip (quantum efficiency curve given in Appendix C) with a pixel resolution of 1280 × 960 (pixel size 3.75 × 3.75 µm) and a Computar 16 mm f/1.4 lens, yielding 48.05 arcsec/pixel for a field of view of about 17.1° × 12.8°. In order to gather our data, we set up our CCD camera on a tripod (pointing as nearly at the zenith as possible), connected it to a laptop to record data, and let it run uninterrupted throughout the night. Under reasonable conditions (clear sky, stars visible, lack of moisture on the lens, etc.), our algorithm was highly reliable. Coordinates were almost always reported accurately, so instead we focus here on failure rate and precision.

By failure rate, we mean the percentage of times the algorithm fails to obtain any attitude whatsoever, given reasonable conditions as explained above. We could also factor in outliers, but so far even our outliers do not stray far from expected values, so we do not include them as failures. In any case, our average failure rate was 0.25%. These failures could be attributed to clouds, fog, temperature changes, and vibrations in the camera platform, all of which would affect the hardware in such a way as to make it nigh impossible to perform real-time star tracking. However, these failures could easily be alleviated by extrapolating the current location based on prior data, e.g. using a Kalman filter, as is commonly done in real-time applications.

The data sets considered below were taken on the 26th of May, 2011, with an exposure/integration time of 200 ms.

Now, theoretically, our setup should result in an RA which changes linearly (modulo 360°) and a constant declination. This behavior is observed in practice, up to what appears to be random error (see Figures 3a and 3b), which will be discussed momentarily. Since the RA data is very linear, its plot conveys little useful information. Hence, we instead plot the local hour angle LHA ≡ LST − RA, modulo a constant (where LST is the local sidereal time). Ideally, both the declination and LHA should be close to constant – yet they appear to drift slowly and randomly. Thus, we chose to plot the declination and LHA using a moving average fit rather than a linear fit to find the acquired precision (see Figures 3c and 3d).

Figure 3: (a) RA local hour angle vs. time, with moving average fit. (b) Declination vs. time, with moving average fit. (c) The moving average residual of the RA LHA vs. time. (d) The moving average residual of the declination vs. time.

Comparing Figures 3a and 3b, it appears that both the RA
and declination experience some amount of error. This error can be partly written off as random error, resulting from subtle environmental changes as described previously regarding failure rate. However, the acquired RA appears to jitter up and down rapidly whereas the declination does not. This is likely due to the fact that, for a stationary camera, the motion of the stars is always in the direction of the RA, and so as stars travel in this direction the lens distortion biases the acquired RA.

The typical random error can be estimated by computing the RMS of the moving average residuals. This results in an error of 6.3 arcseconds (0.13 pixels) for the RA and 4.4 arcseconds (0.09 pixels) for the declination. It is more difficult to precisely determine the accuracy of our measurements, which will be affected by many systematic errors such as atmospheric aberration. However, for many applications only high precision is important, which we have evidently achieved.

9. Conclusion

Although we have not profoundly altered the star identification from the original voting algorithm by Kolomenkin et al., we have adapted it to work more robustly by considering catalog stars with initially fewer votes, and using several stages of voting. In the future, this algorithm may be enhanced by also considering the angles of the triangles formed by triplets of stars. This becomes practical in a secondary or tertiary voting stage, when few candidates are present for each image star.

Moreover, the technique introduced in Section 7 could be used when the camera is rotating quickly with a known axis of rotation. If not available from other sensors, this axis of rotation can be determined using previous identifications made by the star tracker. If the camera is tilted (roll) from its position assumed in Section 7, the image coordinates can be rotated before the technique described is applied. Future work should focus on building deblurring algorithms with improvements to signal-to-noise ratio. However, the transformations described may still be useful for more advanced algorithms. In particular, solving the inverse problem of the deblurring algorithm described in Section 7 would restore the original image before blur was present.

10. Acknowledgements

We would like to thank Gary Hughes for useful discussions. This work is supported in part by the NASA California Space Grant and by the Institute for Terahertz Science and Technology at UC Santa Barbara.

Appendix A. Distortion Model

One way to construct a distortion model for the lens, to improve measurements once a functional star tracker exists, is to use a least squares regression on a large number of images taken without moving the device (i.e. the differences in the images will occur because of the Earth's rotation). Thus, the corrected coordinates can be predicted, and the constants of the model can be found thereafter.

In order to predict the corrected coordinates for each of the images, a rotation T_t (depending on the time t) can be applied to each of the stars in all of the images, such that the rotation T_t R_t will bring the center of each image in the image coordinates to almost the same coordinates r_0,i, where R_t is the rotation from the image coordinates to the sky coordinates discussed in Section 6 (it also depends on the time t). T_t is predicted by rotating about the Earth's axis (the z-axis in the sky coordinates). It is important to keep track of clock drift, as precise timing is extremely important in predicting T_t. Also, in calculating this rotation, one should make sure to convert from time to radians using sidereal units. r_0,i should be picked based on a time near the average of the times of the images being used. Then, all of the transformations T_t R_t based on different times (but all transforming from the same image coordinates to the r_0,i coordinates) can be averaged, and the average can be rotated by the inverse of each T_t to give the expected transformation from image to sky coordinates for each image. Then, the expected coordinates of each star can be calculated using this transformation.

In the technique presented, where δ and λ are calculated as in Appendix B for each star (from measurements), the model assumes that the corrected coordinates δ_c, λ_c are quadratic in δ and λ:

    δ_c = c0 + c1 δ + c2 λ + c3 δλ + c4 δ² + c5 λ², (A.1)
    λ_c = d0 + d1 δ + d2 λ + d3 δλ + d4 δ² + d5 λ². (A.2)

We must find the constants c0, ..., c5 and d0, ..., d5. To do so, we minimize the sums of the squares

    A = Σ_i ε_1,i², B = Σ_i ε_2,i² (A.3)

of the errors

    ε_1,i = δ_c,i − (c0 + c1 δ_i + c2 λ_i + c3 δ_i λ_i + c4 δ_i² + c5 λ_i²), (A.4)
    ε_2,i = λ_c,i − (d0 + d1 δ_i + d2 λ_i + d3 δ_i λ_i + d4 δ_i² + d5 λ_i²). (A.5)

The sums here are to be taken over all of the stars (N) in all of the images used. Taking the partial derivatives of A and B with respect to each of c0, ..., c5 and d0, ..., d5, respectively, and setting the result to 0 each time yields a system of linear equations from which the constants may be found:

    M C = K1, M D = K2, (A.6)

where

    C = (c0, ..., c5)ᵀ, D = (d0, ..., d5)ᵀ, (A.7)

    K1 = (Σ δ_c, Σ δ_c δ, Σ δ_c λ, Σ δ_c δλ, Σ δ_c δ², Σ δ_c λ²)ᵀ,
    K2 = (Σ λ_c, Σ λ_c δ, Σ λ_c λ, Σ λ_c δλ, Σ λ_c δ², Σ λ_c λ²)ᵀ, (A.8)
and

    M = [ N    Σδ     Σλ     Σδλ    Σδ²    Σλ²   ]
        [ ∗    Σδ²    Σδλ    Σδ²λ   Σδ³    Σδλ²  ]
        [ ∗    ∗      Σλ²    Σδλ²   Σδ²λ   Σλ³   ]
        [ ∗    ∗      ∗      Σδ²λ²  Σδ³λ   Σδλ³  ]
        [ ∗    ∗      ∗      ∗      Σδ⁴    Σδ²λ² ]
        [ ∗    ∗      ∗      ∗      ∗      Σλ⁴   ], (A.9)

where ∗ denotes the symmetry of the matrix.

This distortion model was not used in our implementation due to lack of an extremely accurate clock. This could be resolved using a GPS-based clock, in addition to a highly stable mount. This model can be extended to higher-order coefficients if the need arises.
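In practice the constants can be recovered without forming M explicitly: applying a least-squares solver to the design matrix built from the basis (1, δ, λ, δλ, δ², λ²) solves exactly the normal equations (A.6). A minimal sketch (our illustration; names are assumptions):

```python
import numpy as np

def fit_distortion(delta, lam, delta_c, lam_c):
    """Quadratic distortion fit of Eqs. (A.1)-(A.2). Inputs are 1-D arrays
    of measured (delta, lam) and predicted corrected (delta_c, lam_c)
    coordinates, gathered over all stars in all images."""
    G = np.column_stack([np.ones_like(delta), delta, lam,
                         delta * lam, delta**2, lam**2])
    c, *_ = np.linalg.lstsq(G, delta_c, rcond=None)  # c0..c5 of Eq. (A.1)
    d, *_ = np.linalg.lstsq(G, lam_c, rcond=None)    # d0..d5 of Eq. (A.2)
    return c, d
```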

Appendix B. Image Star Coordinates on the Celestial Sphere

Since the x-axis of the planar image was arbitrarily chosen to be parallel to the y-axis in the three-dimensional space, two angular coordinates were assigned to each image star: λ (longitude) and δ (altitude). First, the angular distance from the center of the image, φ, and the angle from the image's x-axis, θ, were calculated (see Figure B.5). If R is the planar distance of the star from the center of the image, X and Y are the star's planar coordinates, and F is the focal length (considered to be the radius of the celestial sphere), it follows that

    tan φ = R/F. (B.1)

A method for obtaining the correct value of F is given in Appendix D.

Figure B.5: The construction used to obtain the spherical coordinates of a particular star.

In order to find δ we utilize the spherical law of sines,

    sin δ = sin θ sin φ, (B.2)

and the spherical law of cosines (Taff, 1991),

    cos λ cos δ = cos φ. (B.3)

A sign ambiguity results for λ, which can be resolved using the sign of X. The coordinates of the star on the unit sphere in space can then be determined by:

    x = cos δ cos λ, (B.4)
    y = cos δ sin λ, (B.5)
    z = sin δ. (B.6)
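A direct transcription of Eqs. (B.1)-(B.6) follows (our sketch; the function name and the pixel-unit convention for X, Y, F are assumptions):

```python
import math

def image_star_to_sky_vector(X, Y, F):
    """Planar star coordinates (X, Y) and focal length F (same units)
    to a unit vector in space, per Eqs. (B.1)-(B.6)."""
    R = math.hypot(X, Y)
    phi = math.atan2(R, F)                     # Eq. (B.1): tan(phi) = R/F
    theta = math.atan2(Y, X)                   # angle from the image x-axis
    delta = math.asin(math.sin(theta) * math.sin(phi))        # Eq. (B.2)
    lam = math.acos(math.cos(phi) / math.cos(delta))          # Eq. (B.3)
    lam = math.copysign(lam, X)                # resolve the sign using X
    return (math.cos(delta) * math.cos(lam),   # Eqs. (B.4)-(B.6)
            math.cos(delta) * math.sin(lam),
            math.sin(delta))
```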
Appendix C. Quantum Efficiency

The Mightex CCD camera used has a Sony ICX445AL image sensor chip. The quantum efficiency is plotted in Figure C.6 for reference, as provided by Point Grey Research TAN2008006.

Figure C.6: The quantum efficiency curve for the Sony ICX445AL image sensor (quantum efficiency in percent vs. wavelength in nm).

Appendix D. Focal Length Determination

One way to estimate the focal length in pixels is to use the properties of the camera chip and lens. However, real images can be used to make a better prediction, once identifications have been made for n stars in a particular image: consider the n(n − 1)/2 possible pairs of stars in the image. One way to determine the angular distance of such a pair is to use the cataloged coordinates of both stars in the pair, r_i and r_j:

    angular distance = cos⁻¹(r_i · r_j). (D.1)

Based on a particular focal length F, one can calculate the same quantity from the measured coordinates (as in Appendix B). The error of this particular prediction of the angular distance, ∆θ_ij(F), is then found by subtracting the former from the latter. Next, we minimize the quantity

    E = Σ_{i<j} (∆θ_ij(F))² (D.2)

with respect to F using a recursive method (such as applying a binary search over a range reasonable for focal lengths). This
gives us the correct value of the focal length F, and the mean
square error can be obtained from E.
Note that this technique can also be applied once the device is in use, assuming the focal length is initially close enough
to make star identifications. This may be useful when the star
tracker is sent to environments with variable conditions, as the
focal length may vary with temperature. Furthermore, since this
technique could be used to find the focal length dynamically, it
could assist in refocusing the camera.
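A sketch of this fit is given below (our illustration). It reuses the Appendix B conversion sketched above, and substitutes a ternary search over a bracket for the text's binary search, on the assumption that E(F) is unimodal over the bracket.

```python
import numpy as np

def fit_focal_length(pixel_xy, catalog_vecs, f_lo, f_hi, iters=60):
    """Minimize E(F) of Eq. (D.2) over focal lengths in [f_lo, f_hi].
    `pixel_xy` is an (n, 2) array of identified stars' planar coordinates;
    `catalog_vecs` the matching (n, 3) catalog unit vectors.
    `image_star_to_sky_vector` is the Appendix B sketch."""
    def pair_angles(vecs):
        return np.array([np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
                         for i, u in enumerate(vecs)
                         for v in vecs[i + 1:]])

    cat_angles = pair_angles(catalog_vecs)          # Eq. (D.1)

    def E(F):
        vecs = [image_star_to_sky_vector(x, y, F) for x, y in pixel_xy]
        return np.sum((pair_angles(vecs) - cat_angles) ** 2)   # Eq. (D.2)

    for _ in range(iters):                          # shrink the bracket on F
        m1, m2 = f_lo + (f_hi - f_lo) / 3, f_hi - (f_hi - f_lo) / 3
        if E(m1) < E(m2):
            f_hi = m2
        else:
            f_lo = m1
    return 0.5 * (f_lo + f_hi)
```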
References

Gebre-Egziabher, D., Elkaim, G. H., 2008. MAV attitude determination by vector matching. IEEE Transactions on Aerospace and Electronic Systems 44(3).
Gebre-Egziabher, D., Elkaim, G. H., Powell, J. D., Parkinson, B. W., 2002. A gyro-free quaternion-based attitude determination system suitable for implementation using low-cost sensors. Proceedings of the IEEE PLANS 2002, San Diego, CA, 185–192.
Hoffleit, E. D., Warren, W. H., Jr., 1991. Yale Bright Star Catalogue, 5th revised edition. Yale University Observatory, New Haven.
Horn, B. K. P., 1987. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A 4, 629–642.
Kolomenkin, M., Pollak, S., Shimshoni, I., Lindenbaum, M., 2008. Geometric voting algorithm for star trackers. IEEE Transactions on Aerospace and Electronic Systems 44(2), 441–456.
Meeus, J., 1991. Astronomical Algorithms. Willmann-Bell, Richmond, Virginia.
Mortari, D., Neta, B., 2000. k-vector range searching techniques. Advances in the Astronautical Sciences 105(1), 449–464.
Padgett, C., Delgado, K. K., 1997. A grid algorithm for autonomous star identification. IEEE Transactions on Aerospace and Electronic Systems 33(1), 202–213.
Quine, B. M., Durrant-Whyte, H. F., 1996. A fast autonomous star-acquisition algorithm for spacecraft. Control Engineering Practice 4(12), 1735–1740.
Spratling, B. B., Mortari, D., 2009. A survey on star identification algorithms. Algorithms 2, 93–107.
Taff, L. G., 1991. Computational Spherical Astronomy. Krieger Publishing Company, Malabar, Florida.
Thomas, S., 2004. Optimized centroid computing in a Shack–Hartmann sensor. Advancements in Adaptive Optics 5490, 1238–1246. http://dx.doi.org/10.1117/12.550055
Včelák, J., Ripka, P., Kubík, J., Platil, A., Kašpar, P., 2005. AMR navigation systems and methods of their calibration. Sensors and Actuators A 123–124, 122–128.
Wu, X., 1991. An efficient antialiasing technique. Computer Graphics 25(4), 143–152.
