
arXiv:1902.04541v1 [cs.CV] 12 Feb 2019

Center of circle after perspective transformation*

Xi Wang†  Albert Chern‡  Marc Alexa§
TU Berlin

* Preliminary work.
† e-mail: [email protected]
‡ e-mail: [email protected]
§ e-mail: [email protected]

ABSTRACT

Video-based glint-free eye tracking commonly estimates gaze direction based on the pupil center. The boundary of the pupil is fitted with an ellipse and the Euclidean center of the ellipse in the image is taken as the center of the pupil. However, the center of the pupil is generally not mapped to the center of the ellipse by the projective camera transformation. The error resulting from using a point that is not the true center of the pupil directly affects eye tracking accuracy. We investigate the underlying geometric problem of determining the center of a circular object based on its projective image. The main idea is to exploit two concentric circles – in the application scenario these are the pupil and the iris. We show that it is possible to compute the center and the ratio of the radii from the mapped concentric circles with a direct method that is fast and robust in practice. We evaluate our method on synthetically generated data and find that it improves systematically over using the center of the fitted ellipse. Apart from applications in eye tracking, we expect that our approach will be useful in other tracking applications.

Index Terms: [Computing Methodologies]: Computer Vision—Model development and analysis

1 INTRODUCTION

It is becoming more and more common to include video cameras in head mounted stereo displays to enable eye tracking in virtual reality environments. In this setting, the optical axis of the eye is usually determined based on the center of the pupil, which can be extracted from the video images. Identification of the pupil in the image stream is commonly done by fitting an ellipse to the boundary between the dark pupil and the much lighter iris. This approach introduces two sources of error: 1) the center of the ellipse in the image is not the center of the pupil, because the camera transformation is projective (and not just affine); 2) the refraction at the cornea distorts the pupil shape in addition to the projective camera transformation. The problems from refraction can be circumvented in other situations by additional tracking equipment [7]. Without such equipment it may be possible to compensate for the effect based on a computationally involved inverse model [6]. We focus on the first problem, namely the non-affine mapping of the center. To our knowledge, this is the first work providing a computational approach for estimating the true pupil center in the context of video-based eye tracking.

The fact that the center of a circle is not mapped to the center of an ellipse under projective transformations is illustrated in Figure 1: the pair of ellipses in (a) is the result of perspectively projecting the two concentric circles in (b) to the image plane. The centers of the two ellipses are not coincident (c), and neither coincides with the projected center p of the concentric circles. As we explain later, there are in fact many projective transformations that would map a pair of concentric circles to the two ellipses found in the image; however, all of them give rise to the same projective center, and all of them agree on the ratio of the radii. In other words, based on the pair of ellipses, the center of the concentric circles and the ratio of the radii are uniquely determined.

Figure 1: A pair of ellipses (a) as the image of a pair of concentric circles (b) in perspective. Taking the data from a conventional ellipse detection tool (shown in axes in (c)), a simple algorithm proposed in this paper locates the true center p respecting the perspective, and determines the radii ratio R/r of the inferred circles (d).

The set of concentric circles is an instance of a pencil of conics. One of the degenerate circles in this pencil corresponds to the true center, and it is invariant under projective transformations. In Section 4 we explain this concept and show how it leads to a simple formulation for a computational approach.

We apply our method to estimate the true pupil center by using a pair of ellipses, i.e., the pupil ellipse and the iris ellipse. Tested with synthetic data, we show that our method provides a robust estimation of the pupil center with respect to projective distortion (see Section 5). Compared to the pupil center estimated from ellipse fitting, our estimation shows a significant improvement, with less than one pixel distance to the true pupil center in most cases.

Apart from eye tracking, our method can also be used for related tasks in computer vision. The idea of exploiting the projective invariant properties of concentric circles is not new. It has been used for markers consisting of two concentric circles specified by a ring [15] or for localization in robotics applications [8]. Compared to iterative optimization techniques, our formulation as an eigenproblem is more direct.

2 BACKGROUND AND RELATED WORK

The estimation of gaze direction in video-based eye tracking relies on measuring the pupil center [7, 13]. Image analysis algorithms are used to estimate the pupil center: the boundary of the pupil is fitted with an ellipse, and its Euclidean center is taken as the pupil center [10, 20]. The estimated pupil center is then mapped to gaze direction through calibration, very often with the aid of a glint, known as the corneal reflection, reflected from a controlled light source. However, most mobile eye tracking systems are based on glint-free tracking [16].

How to accurately estimate the pupil center position is of major concern in the eye tracking community. The main challenges arise from distortions in the captured images, namely the projective distortion and the refractive distortion [6, 25]. Projective distortion moves the projected pupil center away from the ellipse center, as shown in Figure 1. The refractive distortion is caused by the refractive index of the cornea (n = 1.33 versus n_air = 1.0 [2]), which leads to irregularly distorted pupil boundaries in the camera image. Such distortion depends on the camera viewing direction [9] even when the cornea surface is simplified to a perfect sphere. The analysis gets more complicated when the curvature of the cornea changes [3]. Glint-based eye tracking systems implicitly model the refractive distortion [11], for example by using the difference vector between the pupil center and the corneal reflection for gaze estimation.

Apart from the feature-based gaze estimation described above, appearance-based gaze estimation methods rely on the detection of eye features in the images [1, 19], such as the iris center and the eye corners. These methods aim to track eye movements, for example, with webcams. Image resolution in the eye region is limited, and the appearances of pupil and cornea are less distinguishable. Very often full faces are visible in the images; therefore, facial landmarks are used as additional information in deep learning methods [18, 26]. However, simultaneous detection of both pupil and iris is difficult in both feature-based and appearance-based methods, and it inevitably requires many empirical parameter settings [12, 23]. Robust detection of both pupil and iris remains a challenging problem, especially since irises are partially occluded by eyelids.

Additionally, model-based gaze estimation has been proposed, where multiple eye images are used to optimize a three-dimensional eyeball model [22]. Recently a model-fitting approach accounting for corneal refraction has been proposed [6]. In principle, these model-fitting methods are based on ray tracing, which is a considerably expensive procedure. It results in a non-linear minimization problem that requires iterative solving procedures.

We propose a simple method utilizing the underlying geometric properties of two concentric circles, the pupil and the iris. Our method directly computes the true pupil center in the camera image as well as the projectively invariant radii ratio. In general, a large distance to the camera image center leads to large projective distortion; in the context of eye tracking, this means larger pupils are more seriously distorted. A recent study [14] shows that changes in pupil size can lead to severe accuracy drops in eye tracking. Pupil size has been used to study an observer's cognitive load [17] as well as fatigue when using VR headsets [24]. Our method can also be used to accurately estimate the pupil size in these applications.

3 CONCENTRIC CIRCLES IN PERSPECTIVE

Given an ellipse that is the result of the projective transformation of a circle, is it possible to identify the projected center of the circle? The answer to this question is no. There is an infinite set of projective transformations that map a circle to the observed ellipse while sending the center to an arbitrary point. Therefore, given only an ellipse from a camera view without any information about the perspective, one cannot retrieve the image of the center of the original circle. This center retrieval problem, however, becomes drastically different when the given image contains two ellipses coming from a pair of concentric circles. In the following we introduce the main observations and the resulting algorithm; the following section provides the mathematical justification.

Given two ellipses (or more generally, conics) there may be many projective transformations that map the ellipses into an inferred pair of concentric circles. Nevertheless, these possible transformations will all agree on

1. a unique point p that is sent to the center of the target concentric circles; and

2. the ratio R/r of the radius R of the larger circle to the radius r of the smaller circle.

In other words, based on only two ellipses in an image, and without any information on the projective transformation from the eye to the image plane of the camera, one can identify the center of the concentric circles in the image and compute the radii ratio of the circles.

It turns out that finding the center point p and the radii ratio only amounts to an eigenvalue problem of a 3 × 3 matrix, based on which we give a simple algorithm for the center retrieval problem.

3.1 Matrix Representation of Conics

A conic, such as an ellipse, takes a general implicit form

Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0,   (1)

which can be expressed in the following matrix form:

\[
\begin{pmatrix} x & y & 1 \end{pmatrix}
\underbrace{\begin{pmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{pmatrix}}_{Q}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = 0,
\]

where the symmetric matrix Q is called the matrix representation of the conic, and ṽ = (x, y, 1)^T is a vector in homogeneous coordinates. Note that the rescaled matrix Q ↦ αQ, where α is a nonzero scalar, defines the same conic. This means the matrix representation is a homogeneous coordinate of the conic. Each conic uniquely corresponds to such a homogeneous matrix representation.

For an ellipse with geometric parameters c = (c_x, c_y) (geometric centroid), a (major semiaxis), b (minor semiaxis), and rotation angle θ, the coefficients in (1) are given by

A = a^2 sin^2 θ + b^2 cos^2 θ,   B = 2(b^2 − a^2) cos θ sin θ,
C = a^2 cos^2 θ + b^2 sin^2 θ,   D = −2A c_x − B c_y,
E = −B c_x − 2C c_y,   F = A c_x^2 + B c_x c_y + C c_y^2 − a^2 b^2.
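For concreteness, this conversion can be written in a few lines of NumPy. The following is a minimal sketch; the function name conic_matrix and the parameter order are our own choices for illustration, not part of the paper:

```python
import numpy as np

def conic_matrix(cx, cy, a, b, theta):
    """Matrix representation Q of an ellipse with centroid (cx, cy),
    semiaxes a and b, and rotation angle theta (radians), using the
    coefficient formulas above."""
    A = a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2
    B = 2.0 * (b**2 - a**2) * np.cos(theta) * np.sin(theta)
    C = a**2 * np.cos(theta)**2 + b**2 * np.sin(theta)**2
    D = -2.0 * A * cx - B * cy
    E = -B * cx - 2.0 * C * cy
    F = A * cx**2 + B * cx * cy + C * cy**2 - a**2 * b**2
    return np.array([[A,       B / 2.0, D / 2.0],
                     [B / 2.0, C,       E / 2.0],
                     [D / 2.0, E / 2.0, F      ]])
```

For a circle of radius r centered at the origin (a = b = r, θ = 0) this yields r^2 · diag(1, 1, −r^2), which matches the matrices used in Sec. 4.2.2 up to the irrelevant overall scale.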
3.2 Algorithm

Let Q1, Q2 be two detected ellipses, represented in matrix form, and assume they are the result of an unknown projective transformation of a pair of concentric circles. Then the following algorithm finds the center p and the radii ratio R/r for the inferred pair of concentric circles.

Algorithm 1 Concentric circles in perspective

Input: Q1, Q2 — two ellipses as 3 × 3 symmetric matrices.
1: A := Q2 Q1^{-1}.
2: (λ_i, u_i), i = 1, 2, 3 ← the eigenvalue and eigenvector pairs of A, with the observation that λ1 ≈ λ2 and λ3 is distinguished.
3: p̃ = (p̃_x, p̃_y, p̃_z)^T := Q1^{-1} u3.
4: p := (p̃_x/p̃_z, p̃_y/p̃_z).
Output: p and R/r := √(λ2 λ3 / λ1^2).
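Algorithm 1 translates almost line by line into NumPy. The sketch below is our illustrative reading of it, under the paper's assumption that the input conics are projective images of concentric circles; in particular, the rule for singling out the distinguished eigenvalue λ3 (pick the eigenvalue whose two companions are closest to each other) is our own heuristic for step 2:

```python
import numpy as np

def concentric_center(Q1, Q2):
    """Algorithm 1: given 3x3 symmetric conic matrices Q1, Q2 that are
    projective images of a pair of concentric circles, return the
    projected common center p and the radii ratio R/r."""
    A = Q2 @ np.linalg.inv(Q1)                       # step 1
    lam, U = np.linalg.eig(A)                        # step 2
    # In theory the spectrum is real with a double eigenvalue; with
    # noisy input the double pair may come back slightly complex, so
    # we keep the real parts.
    lam, U = lam.real, U.real
    # lambda_3 is the eigenvalue whose two companions nearly coincide.
    others = {0: (1, 2), 1: (0, 2), 2: (0, 1)}
    i3 = min(range(3), key=lambda i: abs(lam[others[i][0]] - lam[others[i][1]]))
    i1, i2 = others[i3]
    p_h = np.linalg.inv(Q1) @ U[:, i3]               # step 3: homogeneous center
    p = p_h[:2] / p_h[2]                             # step 4: dehomogenize
    # radii ratio from the output line; abs() guards against sign flips
    # caused by the arbitrary scaling of the conic matrices
    ratio = np.sqrt(abs(lam[i2] * lam[i3] / lam[i1]**2))
    return p, ratio
```

Running it on the matrix representations of the two detected ellipses returns the perspective-corrected center and the radii ratio in one shot, with the cost dominated by a single 3 × 3 eigendecomposition.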
4 MATHEMATICAL JUSTIFICATION

The rather straightforward Algorithm 1 is derived based on projective geometry. In this section we provide the necessary background about conics in projective planes, and derive the formulae in Alg. 1.

4.1 Special Sets of Conics

In the following we provide a number of basic relations between the algebraic and the geometric aspects of conics. These notions allow us to characterize projective transforms of concentric circles.

4.1.1 Degenerate Conics

A conic, represented as a 3 × 3 symmetric matrix Q, is degenerate if det(Q) = 0. What this means geometrically is that the conic becomes one point, one line, a union of two lines, or a union of two complex conjugate lines whose intersection is a real point.

4.1.2 Projective Transformations

A projective transformation deforms a conic Q into another conic with matrix L^{-T} Q L^{-1}, where L is some general invertible 3 × 3 matrix representing the non-degenerate projective transformation in homogeneous coordinates.

4.1.3 Intersections of Conics

Two generic conics Q1, Q2 can have four, two, or zero real intersection points. When the number of real intersection points is less than four, the missing intersections become imaginary. If one allows x, y to be complex, then there are always four intersection points, counted with multiplicity.

4.1.4 Pencil of Conics

Given two generic conics Q1, Q2, one can construct a family of conics through linear combinations

Q(α,β) := αQ1 + βQ2,   α, β ∈ C.

This family of conics is called a pencil of conics. Geometrically, the pencil of conics consists of all conics passing through the four fixed (possibly complex) intersection points of Q1 and Q2. Since a rescaling of the matrix Q(α,β) results in the same conic, one may use only one parameter λ replacing (α, β) to parametrize the pencil of conics:

Q(λ) := −λQ1 + Q2,   λ ∈ C.   (2)

The minus sign here is for later convenience.

4.1.5 Circles

A conic Q is a circle if A = C and B = 0 in (1). This condition is equivalent to the equation ṽ^T Q ṽ = 0 admitting the two special solutions ṽ = (1, i, 0)^T and ṽ = (1, −i, 0)^T. The two complex points at infinity (1, ±i, 0)^T are called the circular points. (Here, "point at infinity" refers to the vanishing third component of the homogeneous coordinate.)
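As a quick sanity check of this characterization (our own verification, not part of the original text), substitute ṽ = (1, ±i, 0)^T into ṽ^T Q ṽ = 0 using the coefficients of (1):

\[
A \cdot 1 + B \cdot (\pm i) + C \cdot (\pm i)^2 + D \cdot 0 + E \cdot 0 + F \cdot 0^2 \;=\; A \pm iB - C,
\]

which vanishes for both sign choices exactly when A = C and B = 0, i.e., exactly for circles.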
4.1.6 Concentric Circles

Two conics Q1, Q2 are concentric circles if they are not only circles (passing through the circular points (1, ±i, 0)^T) but also intersect only at the circular points, each with multiplicity two. In other words, Q1, Q2 touch at the circular points. The pencil of conics spanned by a pair of concentric circles Q1, Q2 consists of all circles concentric to Q1 and Q2. In this pencil of concentric circles there are several degenerate conics. One of them corresponds to the circle collapsed to the center point. This center point is a degenerate conic, given by the two complex conjugate lines joining the center point to the circular points. Another degenerate concentric circle is the single real line connecting the two circular points.

4.2 Projective Transforms of Concentric Circles

From Sec. 4.1.6 we conclude that two real conics Q1, Q2 can be projectively transformed into a pair of concentric circles if and only if Q1 and Q2 touch at two complex points.

The statement follows from the fact that under any real projective transformation, the touching points (1, ±i, 0)^T of any pair of concentric circles are transformed to some other pair of complex conjugate points, and at the same time the incidence relations—such as the notion of touching—are preserved. Conversely, if two real conics touch at two complex points, then these two touching points must be the complex conjugates of each other (since the conics are real). Hence there exist real projective transformations sending the two touching points to (1, ±i, 0)^T. Such projective transformations effectively map the two conics into a pair of concentric circles.

4.2.1 Finding the Center

Suppose Q1 and Q2 are projective transforms of a pair of concentric circles. Then finding their common center amounts to seeking a degenerate conic in the pencil of conics spanned by Q1 and Q2. Using the parametric equation (2) for the pencil, we solve

det(−λQ1 + Q2) = 0.   (3)

Assuming det(Q1) ≠ 0 we rewrite (3) as

det(−λI + Q2 Q1^{-1}) = 0,

which is an eigenvalue problem for

A := Q2 Q1^{-1}.

The three eigenvalues λ1, λ2, λ3 correspond to three degenerate conics in the pencil. Two of the degenerate conics coincide (λ1 = λ2), being the single line joining the two touching points of Q1, Q2. The other distinguished degenerate conic is a real point p (together with two complex conjugate lines) representing the center we look for.

The point p in the degenerate conic Q(λ3) = −λ3Q1 + Q2 can be found by solving Q(λ3)p = 0:

(−λ3Q1 + Q2)p = 0  ⟹  AQ1p = λ3Q1p.

That is, p = Q1^{-1}u3, where u3 is the eigenvector with Au3 = λ3u3 associated with the eigenvalue λ3.

4.2.2 Radii Ratio

A projective transformation (Sec. 4.1.2) Q1 ↦ L^{-T}Q1L^{-1}, Q2 ↦ L^{-T}Q2L^{-1} yields

A ↦ (L^{-T}Q2L^{-1})(L^{-T}Q1L^{-1})^{-1} = L^{-T}Q2Q1^{-1}L^{T} = L^{-T}AL^{T},

which leaves the eigenvalues of A invariant. Therefore, the ratios of eigenvalues λ1 : λ2 : λ3 are invariant quantities under projective transformations. Here we only consider the ratios, since the matrices Q1, Q2, A are defined only up to a scale.

For Q1, Q2 that are projective transforms of concentric circles, consider a projective transformation Q′1 = L^{-T}Q1L^{-1}, Q′2 = L^{-T}Q2L^{-1} such that Q′1 and Q′2 are concentric circles with radii r and R respectively. Since the radii are invariant under translations, we may assume without loss of generality that the concentric circles are centered at the origin. Then we have

\[
Q'_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -r^2 \end{pmatrix},
\qquad
Q'_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -R^2 \end{pmatrix}
\]
and

\[
A' = L^{-T} A L^{T} = Q'_2 (Q'_1)^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & R^2/r^2 \end{pmatrix}.
\]

Thus the invariant eigenvalue ratio of A is given by λ1 : λ2 : λ3 = 1 : 1 : R^2/r^2. Therefore, the radii ratio R/r is encoded in the eigenvalues λ1 = λ2, λ3 of A.

In practice, when the two conics Q1, Q2 are detected with measurement error, the spectrum of A may only have an approximate double eigenvalue λ1 ≈ λ2 next to λ3. In that case we retrieve the radii ratio of the concentric circles by one of the following symmetrizations:

\[
\frac{R}{r} \;\approx\; \sqrt{\frac{\lambda_2\lambda_3}{\lambda_1^2}} \;\approx\; \sqrt{\frac{\lambda_1\lambda_3}{\lambda_2^2}} \;\approx\; \sqrt[4]{\frac{\lambda_3^2}{\lambda_1\lambda_2}}.
\]
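To see this invariance in action, one can build a pair of concentric circles, push them through an arbitrary homography with the deformation rule from Sec. 4.1.2, and check that Algorithm 1 recovers the mapped center and the ratio R/r. Below is a small test using the conic_matrix and concentric_center sketches from Section 3 (assumed to be in scope; the homography L is an arbitrary choice, not from the paper):

```python
import numpy as np

# concentric circles centered at (2, 1) with radii r = 1 and R = 3
Q1 = conic_matrix(2.0, 1.0, 1.0, 1.0, 0.0)
Q2 = conic_matrix(2.0, 1.0, 3.0, 3.0, 0.0)

# an arbitrary invertible homography L in homogeneous coordinates
L = np.array([[0.9,  0.1,  0.2],
              [-0.2, 1.1,  0.3],
              [1e-3, 2e-3, 1.0]])
Li = np.linalg.inv(L)

# Sec. 4.1.2: a conic Q is deformed into L^{-T} Q L^{-1}
Q1_img = Li.T @ Q1 @ Li
Q2_img = Li.T @ Q2 @ Li

p, ratio = concentric_center(Q1_img, Q2_img)

# ground truth: the homography applied to the common center (2, 1)
c = L @ np.array([2.0, 1.0, 1.0])
print(p, c[:2] / c[2])  # the two points should coincide
print(ratio)            # should be close to R/r = 3
```

In the noise-free case the agreement is exact up to floating point; with detected ellipses the double eigenvalue λ1 ≈ λ2 splits slightly and the symmetrized ratio above absorbs the discrepancy.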
5 RESULTS

We apply our method to estimate the true pupil center using the pupil ellipse and the iris ellipse and evaluate it on synthetic data.

Figure 2: Estimation errors in different camera positions. The x axis corresponds to the rotation angle θ in degrees and the y axis is the estimation error measured in pixels. From (a) to (d), the rotation angle φ changes from 10° to 40°. In each plot, θ varies from 30° to 70°, and the three curves show our estimation, the Euclidean center, and image fitting.

5.1 Experimental Setup

Synthetic eye images are rendered using the 3D model proposed in [21] with the cornea refractive index set to 1.0 (see Figure 3 for examples). The true projected pupil center in the camera image is directly computed from the 3D model, and we also compute the pupil ellipse and the iris ellipse in the image plane. The pupil center estimated by our method is compared to 1) the Euclidean center of the pupil ellipse in the image plane and 2) the pupil center estimated by image-based ellipse fitting following the method proposed in [20].

The eye camera is placed 3 cm in front of the eye, similar to the camera position in a mobile eye tracker. We experiment with various viewing angles (rotations of the camera) and three different pupil sizes while the eyes fixate at various targets. Fixation targets are evenly sampled from a circle that is placed perpendicular to the ground. We compare the Euclidean distance, in pixels, between the true pupil center in the image plane and the pupil center estimated using the different methods.

5.2 Camera Rotation

With a fixed distance to the eye, the camera is placed at a set of locations that are plausible in practice. We use spherical coordinates to describe the rotations: the polar angle φ defines the rotation angle between the camera and the horizontal plane, and the azimuthal angle θ describes the camera rotation in the horizontal plane.

Figure 2 shows the estimation errors using the different methods. We test with φ varying from 10° to 40° and θ varying from 30° to 70°. In all tested scenarios, our method gives the best estimation, with less than one pixel distance to the true pupil center. Estimated Euclidean centers of the pupil ellipse deviate from the true pupil center. Image-based ellipse fitting gives a better estimation, as more sample points from the boundary detection are used for the ellipse fitting. However, the fitting fails when θ is large. In such cases, the projected pupil is small in the camera image and consequently suffers less projective distortion; therefore, the estimated pupil positions get closer to the true position. As shown in Figure 2, estimation errors decrease with increasing θ from left to right in each plot.

5.3 Pupil Size

As we saw in the previous test, the estimation accuracy of the Euclidean ellipse center is correlated with the pupil size in the image plane. In this second experiment, we test three different pupil sizes and compare the estimations when the eyes look at different targets. We evenly place 36 targets on a circle that lies perpendicular to the ground. Figure 3 shows the results, where the estimation error measured in pixels is on the y-axis and the x-axis corresponds to the rotation angle of the eye measured in degrees. As expected, a large pupil size leads to a large estimation error. All estimation methods get worse, but our method consistently provides the most accurate estimation. Once again, the image-based ellipse fitting method fails when the eyes are tilted to some extent, as shown by the second image in each plot.

Figure 3: Estimation errors with various pupil sizes. The camera is placed at a fixed position in front of the eye while fixation targets are evenly sampled from a circle in front. Pupil size increases from left to right and the embedded eye images show examples when the eyes are rotated to the left and right. The x axis corresponds to the rotation angle of the eyes in degrees and the y axis is the estimation error measured in pixels.

6 DISCUSSION AND FUTURE WORK

We introduce a simple algorithm to robustly estimate the true center position from a pair of ellipses projected from a pair of concentric circles. We apply our method in the context of eye tracking and use it to find the true pupil center in the image plane. Evaluation based on synthetic data shows promising results, especially compared to the estimated center of the fitted ellipse. Even though we did not evaluate the performance of the method on the estimation of pupil size, we believe it is possible to estimate the pupil size using the radii ratio.

Despite the fact that our method can accurately estimate the true pupil center under projective distortion, it does not consider refraction, making it unsuitable for real-world eye tracking applications. However, note that our formulation allows us to estimate the radii ratio of two concentric circles, the pupil and the iris in this case, and the iris boundary is much less affected by the refraction at the cornea. Theoretically we could use the estimated ratio to find the exact positions of the pupil center as well as the iris center in the camera image. In other words, this would allow us to implicitly model the corneal refraction and find the true pupil center under both distortions in real scenarios.

Beyond the scope of eye tracking, concentric circle patterns have been commonly used as fiducial markers in computer vision tasks [4, 5]. Since detection accuracy and speed are crucial for real-time applications based on fiducial markers, our method could provide another option for fiducial-marker-based tracking. We are eager to investigate this direction in future work.

REFERENCES

[1] K. Alberto Funes Mora and J.-M. Odobez. Geometric generative gaze estimation (G3E) for remote RGB-D cameras. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
[2] D. A. Atchison and G. Smith. Optics of the Human Eye. Butterworth-Heinemann, Oxford, 2000.

[3] A. Barsingerhorn, F. Boonstra, and H. Goossens. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study. Biomedical Optics Express, 8(2):712–725, 2017.
[4] L. Calvet, P. Gurdjos, C. Griwodz, and S. Gasparini. Detection and accurate localization of circular fiducials under highly challenging conditions. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 562–570, June 2016. doi: 10.1109/CVPR.2016.67
[5] J. DeGol, T. Bretl, and D. Hoiem. ChromaTag: A colored marker and fast detection algorithm. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[6] K. Dierkes, M. Kassner, and A. Bulling. A novel approach to single camera, glint-free 3D eye model fitting including corneal refraction. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ETRA '18, pp. 9:1–9:9. ACM, New York, NY, USA, 2018. doi: 10.1145/3204493.3204525
[7] A. T. Duchowski. Eye Tracking Methodology: Theory and Practice. Springer, 2007.
[8] J. Faigl, T. Krajník, J. Chudoba, L. Přeučil, and M. Saska. Low-cost embedded system for relative localization in robotic swarms. In 2013 IEEE International Conference on Robotics and Automation, pp. 993–998, May 2013. doi: 10.1109/ICRA.2013.6630694
[9] C. Fedtke, F. Manns, and A. Ho. The entrance pupil of the human eye: a three-dimensional model as a function of viewing angle. Optics Express, 18(21):22364–22376, Oct 2010. doi: 10.1364/OE.18.022364
[10] W. Fuhl, T. C. Santini, T. Kübler, and E. Kasneci. ElSe: Ellipse selection for robust pupil detection in real-world environments. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, ETRA '16, pp. 123–130. ACM, New York, NY, USA, 2016. doi: 10.1145/2857491.2857505
[11] E. D. Guestrin and M. Eizenman. General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Transactions on Biomedical Engineering, 53(6):1124–1133, June 2006. doi: 10.1109/TBME.2005.863952
[12] S. Y. Gwon, C. W. Cho, H. C. Lee, W. O. Lee, and K. R. Park. Robust eye and pupil detection method for gaze tracking. International Journal of Advanced Robotic Systems, 10(2):98, 2013. doi: 10.5772/55520
[13] K. Holmqvist, M. Nyström, R. Andersson, R. Dewhurst, H. Jarodzka, and J. Van de Weijer. Eye Tracking: A Comprehensive Guide to Methods and Measures. OUP Oxford, 2011.
[14] I. T. Hooge, R. S. Hessels, and M. Nyström. Do pupil-based binocular video eye trackers reliably measure vergence? Vision Research, 156:1–9, 2019. doi: 10.1016/j.visres.2019.01.004
[15] G. Jiang and L. Quan. Detection of concentric circles for camera calibration. In Tenth IEEE International Conference on Computer Vision (ICCV'05), vol. 1, pp. 333–340, Oct 2005. doi: 10.1109/ICCV.2005.73
[16] M. Kassner, W. Patera, and A. Bulling. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, UbiComp '14 Adjunct, pp. 1151–1160. ACM, New York, NY, USA, 2014. doi: 10.1145/2638728.2641695
[17] O. Palinko, A. L. Kun, A. Shyrokov, and P. Heeman. Estimating cognitive load using remote eye tracking in a driving simulator. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, ETRA '10, pp. 141–144. ACM, New York, NY, USA, 2010. doi: 10.1145/1743666.1743701
[18] C. Palmero, J. Selva, M. A. Bagheri, and S. Escalera. Recurrent CNN for 3D gaze estimation using appearance and shape cues. In BMVC, 2018.
[19] S. Park, X. Zhang, A. Bulling, and O. Hilliges. Learning to find eye region landmarks for remote gaze estimation in unconstrained settings. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ETRA '18, pp. 21:1–21:10. ACM, New York, NY, USA, 2018. doi: 10.1145/3204493.3204545
[20] L. Świrski, A. Bulling, and N. Dodgson. Robust real-time pupil tracking in highly off-axis images. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '12, pp. 173–176. ACM, New York, NY, USA, 2012. doi: 10.1145/2168556.2168585
[21] L. Świrski and N. Dodgson. Rendering synthetic ground truth images for eye tracker evaluation. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '14, pp. 219–222. ACM, New York, NY, USA, 2014. doi: 10.1145/2578153.2578188
[22] L. Świrski and N. A. Dodgson. A fully-automatic, temporal approach to single camera, glint-free 3D eye model fitting [abstract]. In Proceedings of ECEM 2013, Aug. 2013.
[23] A. Szczepański, K. Misztal, and K. Saeed. Pupil and iris detection algorithm for near-infrared capture devices. In IFIP International Conference on Computer Information Systems and Industrial Management, pp. 141–150. Springer, 2014.
[24] K. Ukai and P. A. Howarth. Visual fatigue caused by viewing stereoscopic motion images: Background, theories, and observations. Displays, 29(2):106–116, 2008. doi: 10.1016/j.displa.2007.09.004
[25] A. Villanueva and R. Cabeza. Evaluation of corneal refraction in a model of a gaze tracking system. IEEE Transactions on Biomedical Engineering, 55(12):2812–2822, Dec 2008. doi: 10.1109/TBME.2008.2002152
[26] X. Zhang, Y. Sugano, M. Fritz, and A. Bulling. MPIIGaze: Real-world dataset and deep appearance-based gaze estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(1):162–175, Jan 2019. doi: 10.1109/TPAMI.2017.2778103
