Automatic Eyewinks Interpretation System Using Face Orientation Recognition For Human-Machine Interface
3.3 Algorithm

At this stage we do not yet know whether the identified face region is accurate or not. However, if the subsequent eye detection can locate an eye region, then the detected face region is not a false alarm. After the face region detection, there may be more than one face-like region in the block image. We select the maximum region as the face region. We assume that the eyes are located in the upper half of the face area. Once the face region is found, it may be assumed that the possible eye region is the upper portion of the face region. The eyes are searched for within the yellow rectangle area only. The above steps are illustrated in Fig. 3 for a sample image.

Figure 3: Results of face detection: (a) The original image (b) Hue distribution of the original image (c) Skin color projection and (d) Possible eye region

The covariance of x and y is

\[ \mathrm{Covariance}(x, y) = c_{12} = c_{21} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}), \]

where

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad \text{and} \qquad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i . \]

Using these, construct the matrix

\[ C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}, \]

where c_{11} and c_{22} are the variances of x and y, respectively.

3.4.1 Eigen values computation

\[ \lambda_L = \frac{1}{2}\left[\, c_{11} + c_{22} + \sqrt{(c_{11} - c_{22})^2 + 4c_{12}^2}\, \right] \]

\[ \lambda_S = \frac{1}{2}\left[\, c_{11} + c_{22} - \sqrt{(c_{11} - c_{22})^2 + 4c_{12}^2}\, \right] \]

(iv) Solve for the eigenvector V in CV = \lambda V.

An eye edge image is represented by a feature vector consisting of the edged pixel values. We manually select the two classes: a positive set (eye) and a negative set (non-eye). The eye images are processed using histogram equalization and their sizes are normalized to 20×10 pixels. Fig. 4 shows the samples, consisting of open eye images, closed eye images, and non-eye images. The eye detection algorithm searches every candidate image block inside the possible eye region to locate the eyes. Each image block is processed by a Sobel edge detector.
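As a rough illustration of this preprocessing, the sketch below equalizes, normalizes, and edge-filters one candidate block with OpenCV. It is a minimal sketch only: the function name and the Sobel parameters are our own choices, not taken from the paper.

```python
import cv2

def preprocess_eye_candidate(block_gray):
    """Turn one grayscale candidate block into an edge feature vector."""
    equalized = cv2.equalizeHist(block_gray)       # histogram equalization
    normalized = cv2.resize(equalized, (20, 10))   # normalize size to 20x10 pixels
    # Sobel gradients in x and y, combined into one edge magnitude image.
    gx = cv2.Sobel(normalized, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(normalized, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    # Feature vector consisting of the edged pixel values.
    return edges.flatten()
```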
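The covariance and eigenvalue computation of Section 3.4.1 can likewise be sketched with NumPy. This is a minimal sketch under one assumption of ours: as Tables 1 and 2 (eigenvalues versus slope and radius) suggest, x and y are taken to be the coordinates of edge pixels inside a region of support such as a 5×5 or 7×7 window, and the function name is hypothetical.

```python
import numpy as np

def support_region_eigenvalues(edge_block):
    """Small and large eigenvalues of the covariance matrix of edge-pixel
    coordinates inside a region of support (e.g. a 5x5 or 7x7 window)."""
    ys, xs = np.nonzero(edge_block)                  # coordinates of edge pixels
    n = xs.size
    x_bar, y_bar = xs.mean(), ys.mean()              # means of x and y
    c11 = np.sum((xs - x_bar) ** 2) / n              # variance of x
    c22 = np.sum((ys - y_bar) ** 2) / n              # variance of y
    c12 = np.sum((xs - x_bar) * (ys - y_bar)) / n    # covariance c12 = c21
    root = np.sqrt((c11 - c22) ** 2 + 4.0 * c12 ** 2)
    lam_small = 0.5 * (c11 + c22 - root)             # lambda_S
    lam_large = 0.5 * (c11 + c22 + root)             # lambda_L
    return lam_small, lam_large

# Edge pixels lying exactly on a straight line give a small eigenvalue of zero,
# consistent with the 0.00000 entries for the 0 and 45 degree slopes in Table 1.
print(support_region_eigenvalues(np.eye(7, dtype=np.uint8)))
```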
Figure 4 (a) Original open eye (b) Original closed eye (c) Binarized open eye and (d) Binarized closed eye

3.4.2 Face position tracking

Face orientation tracking is applied to find the face position in each frame by using template matching. Given the detected face positions in the previous frame, the positions in subsequent frames can be tracked frame by frame. Once the position is correctly localized, we update the facial templates (gray level image) for face position tracking in the next frame. The different orientations of the face for a sample image are illustrated in Figure 5. For each of the orientations, the obtained small eigen values are presented in Table 1. Similarly, for faces with different radii (Fig. 6), the obtained small and large eigen values are presented in Table 2.

Figure 5 Different eye orientations captured

Table 1 Slope versus Small Eigen values

Slope angle (θ)   W = 3x3   W = 5x5   W = 7x7
0°                0.00000   0.00000   0.00000
10°               0.09456   0.09538   0.09745
15°               0.10372   0.10486   0.10549
30°               0.11645   0.11823   0.11942
45°               0.00000   0.00000   0.00000

Table 2 Radius versus small (λs) and large (λL) eigen values

Radius (pixels)   w = 5x5 (λs / λL)   w = 7x7 (λs / λL)   w = 9x9 (λs / λL)
30                0.249 / 0.249       1.397 / 1.398       2.542 / 2.544
50                0.089 / 0.090       0.465 / 0.465       0.840 / 0.841
70                0.042 / 0.042       0.231 / 0.231       0.419 / 0.422
90                0.023 / 0.023       0.140 / 0.140       0.256 / 0.257
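A minimal sketch of the template-matching step of Section 3.4.2, assuming OpenCV's normalized cross-correlation matcher; the correlation method and the function name are our choices, not necessarily the authors'.

```python
import cv2

def track_face(template_gray, frame_gray):
    """Relocate the gray-level face template in the current frame."""
    # Slide the template over the frame and score each position.
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, (x, y) = cv2.minMaxLoc(scores)
    h, w = template_gray.shape
    # The matched patch becomes the updated template for the next frame.
    updated_template = frame_gray[y:y + h, x:x + w].copy()
    return (x, y), best_score, updated_template
```

Re-using the matched patch as the template for the next frame mirrors the template update described above.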
It is observed that, in the closed eye image, the centroid is located at the center of the eyelashes instead of the center of the bounding box. Based on the centroids of the binarized images, the eye tracking will be faster and more accurate.

4. Proposed Methodology

The methodology for face orientation detection is given as follows:
Step 1: For the given grayscale image, find the edge image using suitable edge detection operators.
Step 2: Obtain the eigen values of the covariance matrix of the edge image.
Step 3: Perform the Hough Transform (HT) using the sparse matrix technique.
Step 4: Find the meaningful set of distinct Hough peaks using the following steps (neighborhood suppression scheme; a sketch is given after Step 6):
   i) Find the accumulator cell containing the highest value and record its location.
   ii) Set to zero the accumulator cells in the immediate neighborhood of the maximum found. Select the window size based on the accuracy needed.
   iii) Repeat the above steps until the desired peaks have been found.
Step 5: Once the candidate peaks and their locations are identified, find the coordinates with respect to a particular primitive using Bresenham's raster scan algorithm.
Step 6: Construct a full matrix for the nonzero pixels obtained from Step 3 for a corresponding primitive.
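The neighborhood suppression scheme of Step 4 can be illustrated with the following NumPy sketch. It operates on whatever Hough accumulator Step 3 produces; the function name and the default window size are assumptions of ours.

```python
import numpy as np

def hough_peaks(accumulator, num_peaks, window=(5, 5)):
    """Pick distinct Hough peaks by neighborhood suppression (Step 4).

    accumulator : 2-D array of Hough votes (e.g. rho x theta bins).
    num_peaks   : number of distinct peaks to return.
    window      : neighborhood zeroed around each maximum; choose its
                  size according to the accuracy needed.
    """
    acc = accumulator.astype(float).copy()
    half_r, half_c = window[0] // 2, window[1] // 2
    peaks = []
    for _ in range(num_peaks):
        # i) locate the accumulator cell with the highest value
        r, c = np.unravel_index(np.argmax(acc), acc.shape)
        if acc[r, c] <= 0:
            break                      # no meaningful peaks left
        peaks.append((r, c))
        # ii) set to zero the cells in the immediate neighborhood
        r0, r1 = max(r - half_r, 0), min(r + half_r + 1, acc.shape[0])
        c0, c1 = max(c - half_c, 0), min(c + half_c + 1, acc.shape[1])
        acc[r0:r1, c0:c1] = 0
        # iii) repeat until the desired number of peaks is found
    return peaks
```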
Table 3 Predefined Code lengths

5. Experimental Results

A Logitech QuickCam Pro 3000 camera was used to capture the video sequence of the disabled user at a picture resolution of 320×240 pixels. The system was implemented on a PC with an Athlon 3.0 GHz CPU.

Mamatha M. N. received her M.E. degree in Electronics from the University of Bangalore in 1999. She received her B.E. degree in Instrumentation from Mysore University in 1993. Presently, she is working as an assistant professor in B. M. S. College of Engineering, Visvesvaraya Technological University. She is pursuing Ph.D. research at Vinayaka Missions University, Salem, Tamilnadu. Her areas of interest are biomedical instrumentation and transducers. She has presented papers in national and international conferences.

Dr. S. Ramachandran has wide academic as well as industrial experience of over 30 years, having worked as a professor in various engineering colleges as well as a design engineer in industry. Prior to this, he was with the Indian Institute of Technology, Madras. He has industrial and teaching experience, having worked both in India and the USA, designing systems and teaching/guiding students and practicing engineers based on FPGAs and microprocessors. His research interests include developing algorithms, architectures and implementations on FPGAs/ASICs for video processing, DSP applications, reconfigurable computing, open loop control systems, etc. He has a number of papers in international journals and conferences. He is the recipient of the Best Design Award at VLSI Design 2000, an international conference held at Calcutta, India, and the Best Paper Award of the Session at WMSCI 2006, Orlando, Florida, USA. He has completed a video course on Digital VLSI System Design at the Indian Institute of Technology Madras, India, for broadcast on TV by the National Programme on Technology Enhanced Learning (NPTEL). He has also written a book on Digital VLSI Systems Design, published by Springer Verlag, Netherlands (www.springer.com).
Appendix 1

The eigen values for different orientations and regions of support

Orientation   W = 5x5   W = 7x7