Enhancing Face Recognition by Image Warping
Jorge Garcia Bueno
October 2009
"People are not remembered by the number of times they fail but for the number of times they succeed"
Abstract
University of Glasgow
Department of Electronics and Electrical Engineering
Bachelor of Engineering
This project has been developed as an improvement that could be added to current computer vision algorithms. It is based on the original idea proposed and published by Rob Jenkins and Mike Burton about the power of face averages in artificial recognition.
The project aims to create a new automated procedure for face recognition working with average images. Up to now, this technique has been applied manually. With this study, the averaging and warping process is carried out automatically by a computer, saving large amounts of time. Through a clear user interface, the program that has been developed receives a batch of face images of a person and creates an average picture of them by warping each one onto a common face shape. Some settings (colours, size, etc.) can be edited before any average is created, and some options are offered after the job is done to facilitate adding the results to a face database. Previous studies have demonstrated that the generated average picture contains most of the information of the group of original faces and that, therefore, a system can recognise the person more easily than with any single image.
After the development of the software, a computational study is carried out to assess the quality of this solution in terms of accuracy and speed. The program is asked to learn a batch of faces of a group of people and is afterwards tested and compared with existing work to determine whether the algorithm reaches the same level of quality.
Contents
Abstract
List of Tables
Abbreviations
1 Summary (Resumen)
1.1 Acta
1.2 Presentation
1.3 Results
1.4 Publication
2 Introduction
2.1 Computer Vision
2.2 Background
2.3 Applications for the solution
3 Theoretical Aspects
3.1 Digital Image Processing
3.1.1 Introduction
3.1.2 Colour Spaces
3.1.2.1 RGB Space
3.1.2.2 Grey Colour Space
3.1.2.3 HSV Colour Space
3.1.3 Mapping in digital images
3.2 Image warping
3.2.1 Introduction
3.2.2 Basic transformations
3.2.3 Current algorithms for image warping
3.2.3.1 Transformation with one pair of lines
3.2.3.2 Transformation with multiple pairs of lines
B HSI-to-RGB Conversion
D User's manual
D.1 Introduction to the application
D.1.1 Average face creation
D.1.2 Database search
D.2 Manual face creation
D.2.1 Text menu
D.2.2 Main options toolbar
D.2.3 Secondary options toolbar
D.2.3.1 Single node administration
D.2.3.2 Group of nodes administration
D.2.3.3 All nodes administration
D.2.4 Working Area
Bibliography
List of Figures
5.1 Example of some single faces introduced in the face database for the experiments.
5.2 Average images generated by the application. Left images are manual average faces and right images are automatic average faces.
5.3 Example of some non-warped faces.
5.4 Results comparing the recognition rate between a single face and the automatic average face (1).
5.5 Results comparing the recognition rate between a single face and the automatic average face (2).
5.6 Results comparing the recognition rate between a single face and the automatic average face (3).
5.7 Results comparing the recognition rate between a single face and the automatic average face (4).
5.8 Results comparing the overall recognition rate.
Abbreviations
Dedicated to Rosalía, Jorge, Luis and Raquel,
who recognise my face without problem...
Chapter 1
Summary (Resumen)
1.1 Acta
1.2 Presentation
The primary objective of this final-year project is to develop an application capable of improving current face recognition rates through a new method based on the generation of an average face from the sum of several individual faces of one person. The generated average face then serves as the reference for that subject. According to the studies carried out in recent years by Mike Burton and Rob Jenkins, senior lecturers in the Department of Psychology at the University of Glasgow, the probability of recognising a person from his or her face increases considerably when such an average image is used instead of a single photograph.
This project demonstrates that this averaging process is correct, applicable and robust, by means of the implementation of a cross-platform program written in C++ that uses the Qt4 libraries for the design of the graphical environment. The steps followed in this text to reach that objective are the following:
Figure 1.5: Complete schema of the life cycle of the application with all its intermediate steps.
1.3 Results
1.4 Publication
This project was selected by the University of Glasgow to compete for several conferences and awards in the United Kingdom:
2. SET Awards 2009 - Award for the best tutor and student for a research project in the Computer Science category
Chapter 2
Introduction
2.1 Computer Vision
Computer face recognition is nowadays one of the most difficult tasks in computer vision. The idea has been investigated for the last 10 years [1], and the results obtained have never been completely successful. In some cases, simple objects with straightforward shapes have been identified correctly over carefully controlled backgrounds. In other cases, such as Optical Character Recognition, systems are able to detect and process text with accuracies of up to 95.38% [2].
Nevertheless, artificial recognition has to deal with the real world, and that is what makes this process extremely hard for machines. Faces are made up of a wide variety of lines, distances and shapes. As natural objects, human faces are formed by complex edges that make it desperately difficult for computers to fit them to any mathematical model. To make matters more complex, human faces can change spontaneously through physical aspects such as a beard or emotional expressions such as a smile. Furthermore, the passage of time causes physical changes that can barely be predicted [3]. In spite of all these obstacles, face recognition remains a challenge not only for computers but also for the human beings who have to teach computers how to deal with it.
2.2 Background
The earliest studies were started in the 1950s by psychologists; some years later, engineers also took part in the investigations. A few facial expression studies followed
during the 1960s. Research on automatic machine recognition of faces has been carried out over the past 30 years by psychophysicists, neuroscientists and engineers working on various aspects of face recognition by humans and machines [4].
At the same time, engineers have tried to formulate the problem as the recognition of 3D objects from flat 2D images. As a result, they adopted typical pattern-classification techniques in which facial characteristics were used as attributes, obtaining semi-automated systems (see the Bledsoe experiment for details [5]). Others approached the problem with subjective marks such as ear length or lip thickness, which complicated the automation of the procedure much more.
In the 1980s there was a surprising halt. Researchers were confused, and all the results pointed to a lack of supporting technology. Nonetheless, during the early 1990s, commercial attention, technological advances and the demand for security encouraged engineers and psychologists to continue the investigations, extending the paths of interest in the matter.
During the last 15 years, the main target of research has been to create a fully automated system, eliminating the difficulties caused by locating the face inside an input image or extracting facial characteristics (eyes, mouth, nose, eyebrows, etc.). In the meantime, important advances were made in algorithms such as Eigenfaces or Fisherfaces for face recognition and pattern detection.
2.3 Applications for the solution
Applications of face recognition are widespread. Focusing on security and control aspects, it would be very useful to automate the detection of people entering a building, track the people boarding a plane, or directly monitor the people walking in the street. This new concept of people control is being promoted by several countries, such as the United Kingdom [6] or the United States of America [7], because of terrorist attacks. On the other hand, face recognition can be very useful as a biometric method of identification to gain access to a system such as a computer or a building.
It is important to emphasise that face recognition is one of the leading fields in security and control environments. Several governments have invested huge amounts of money and time in the search for applicable solutions. For instance, the US government has performed multiple evaluations to determine the capabilities and limitations of face
recognition, and to encourage and direct future development. The Face Recognition Technology (FERET) evaluation, sponsored from 1993 to 1997 by the Defense Advanced Research Projects Agency (DARPA), was an effort to encourage the development of this technology [8]. Large firms such as Google Inc. are also investigating these aspects after the acquisition of Neven Vision Inc. in 2006. For this reason, improved face recognition would be useful in several existing fields of application:
Several enterprises, such as L-1, offer devices installed at corporate buildings, banks, hospitals, airports and other kinds of entry points that scan faces and decide whether you are welcome [9]. Other companies, such as My Heritage (see Figure 2.2 and Figure 2.1), determine which celebrities you look like most, or reveal whether sons and daughters look more like their fathers than their mothers, and vice versa.
Chapter 3
Theoretical Aspects
3.1 Digital Image Processing
3.1.1 Introduction
A digital image can be represented as an M × N matrix of discrete intensity values f(x, y), where each element corresponds to one pixel:

f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}
Image analysis transforms an image into attributes (e.g. textures, distances, shapes or borders). To do that, two tasks are required: segmentation (dividing the image into small, well-delimited regions) and description (extracting values that define each object uniquely). Finally, the target of the recognition stage is to interpret these objects, analyse their attributes and assign labels to them, making this information available to subsequent systems. The most widely used way to determine which kind of object has been found is a classifier, which provides some intelligence to the machine.
3.1.2 Colour Spaces
Each value of the matrix f(x, y) represents a colour, generally made up of two components: luminance¹ and chrominance². A colour space is a mathematical model that describes how to represent a colour with numbers alone, typically as three components. In the following sections, the most widely used spaces are introduced.
3.1.2.1 RGB Space
Most algorithms and applications are expected to deal with the RGB colour space. That means that the Red, Green and Blue components that define any colour can be separated independently into three matrices of identical dimensions. The model is based on a Cartesian axis system, the space being a normalised cube where the pure values red, green and blue are located at the vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively:
In relation to the software developed for this project, it is important to note that all the algorithms have been developed using the RGB space because of the simplicity it offers. On the other hand, the RGB space has a weak point: unexpected changes in ambient light rapidly modify the intensity of the pixels. More robust colour spaces could therefore be used to avoid abrupt changes due to ambient light.
¹ Luminance: luminous intensity per unit area projected in a given direction.
² Chrominance: the signal used in image systems to convey the colour information of the picture, independently of the accompanying luminance signal.
3.1.2.2 Grey Colour Space
One of the first applications of the RGB space is the conversion into grey scale. There is no single correct conversion from an RGB pixel to a grey level; it always depends on human perception and on the sensitivity response curve of the camera used. If the three intensities are given the same radiance across the visible spectrum (3.1), then green appears the brightest of the three, because the luminous efficiency function peaks inside the green region of the spectrum.
Several proposals have been discussed and approved, two of them being the most used: Craig's approach (3.2) [11] and the model defined by the Commission Internationale de l'Éclairage (CIE) in 1931, based on human perception (3.3) [12] [13].
The results of applying the previous methods are displayed in Figure 3.2.
Figure 3.2: Examples of grey scales (Oban, Scotland, 2008). Original picture, equal radiance, Craig's and CIE models.
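As an illustration of these conversions, the following C++ sketch implements the three grey-scale mappings just discussed. The exact coefficients of equations (3.2) and (3.3) are not reproduced in this text, so the commonly quoted values (0.30/0.59/0.11 for Craig's approach, 0.2126/0.7152/0.0722 for the CIE-derived luma) are assumed here:

#include <cstdint>

struct RGB { std::uint8_t r, g, b; };

// Equal-radiance average of the three channels (equation 3.1).
inline std::uint8_t greyAverage(const RGB &p)
{
    return static_cast<std::uint8_t>((p.r + p.g + p.b) / 3);
}

// Craig-style weighted sum; 0.30/0.59/0.11 are the commonly quoted weights (assumed).
inline std::uint8_t greyCraig(const RGB &p)
{
    return static_cast<std::uint8_t>(0.30 * p.r + 0.59 * p.g + 0.11 * p.b);
}

// CIE-based luma; 0.2126/0.7152/0.0722 are assumed here.
inline std::uint8_t greyCIE(const RGB &p)
{
    return static_cast<std::uint8_t>(0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b);
}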
3.1.2.3 HSV Colour Space
The geometrical representation of this space is a cone (see Figure 3.3). The vertical position corresponds to brightness (or value), the radial position to saturation and the angular position to hue³. The main advantage of this representation is the separation of the hue component from the image colour, which allows intensity changes without modifying the colour saturation [13]. However, the problem is that the conversion from RGB to HSV and vice versa is not linear (see Appendix B for further details). Images are as a general rule stored in RGB format, so moving into this space incurs a large computational cost.
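For reference, a minimal sketch of the standard (non-linear) RGB-to-HSV conversion is given below; it is not the code of the application, only an illustration of why the conversion is more expensive than working directly in RGB:

#include <algorithm>
#include <cmath>

struct HSV { double h, s, v; };   // h in degrees [0, 360), s and v in [0, 1]

// Standard RGB-to-HSV conversion for r, g, b given in [0, 1].
HSV rgbToHsv(double r, double g, double b)
{
    const double maxc  = std::max(r, std::max(g, b));
    const double minc  = std::min(r, std::min(g, b));
    const double delta = maxc - minc;

    HSV out;
    out.v = maxc;                                   // value = brightest channel
    out.s = (maxc > 0.0) ? delta / maxc : 0.0;      // saturation

    if (delta == 0.0)    out.h = 0.0;               // achromatic (grey)
    else if (maxc == r)  out.h = 60.0 * std::fmod((g - b) / delta, 6.0);
    else if (maxc == g)  out.h = 60.0 * ((b - r) / delta + 2.0);
    else                 out.h = 60.0 * ((r - g) / delta + 4.0);

    if (out.h < 0.0) out.h += 360.0;
    return out;
}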
3.1.3 Mapping in digital images
There are several techniques for mapping an image. The two classic ones are forward mapping and reverse mapping; the difference between them is the way the pixels are processed. The first is called forward mapping because the pixels are scanned and copied from the source image to the destination image according to the result of the warping function. The second is called reverse mapping because every pixel in the destination takes its value from the source image (see Figure 3.4). For this option to be valid, the inverse of the warping function must exist (this is not always true, which can cause holes or artefacts inside the image). The characteristic of reverse mapping is that every pixel in the destination image is guaranteed a value, whereas forward mapping can leave some destination points unevaluated.
³ Hue can be normalised between 0 and 100% for some applications, but normally varies between 0° and 360°.
Figure 3.4: Mapping an image. Forward mapping in the upper image and inverse
mapping in the bottom image.
1. Forward mapping: send each pixel f(x) to its corresponding location x′ = h(x) in g(x′).
2. Inverse mapping: get each pixel g(x′) from its corresponding location x = h⁻¹(x′) in f(x).
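The difference between the two strategies can be sketched in C++ as follows. The Image type and the warp functions h and its inverse are hypothetical helpers used only for this illustration; note how the forward version may leave destination pixels unwritten, while the inverse version assigns a value to every one of them:

#include <functional>
#include <vector>

struct Point { double x, y; };

// Minimal greyscale image type used only for this illustration.
struct Image {
    int width, height;
    std::vector<unsigned char> data;
    unsigned char get(int x, int y) const       { return data[y * width + x]; }
    void          set(int x, int y, unsigned char v) { data[y * width + x] = v; }
    bool          inside(int x, int y) const    { return x >= 0 && x < width && y >= 0 && y < height; }
};

// Forward mapping: scan the source; some destination pixels may never be written (holes).
void forwardMap(const Image &src, Image &dst, const std::function<Point(Point)> &h)
{
    for (int y = 0; y < src.height; ++y)
        for (int x = 0; x < src.width; ++x) {
            Point p = h({double(x), double(y)});          // x' = h(x)
            int dx = int(p.x + 0.5), dy = int(p.y + 0.5);
            if (dst.inside(dx, dy)) dst.set(dx, dy, src.get(x, y));
        }
}

// Inverse (reverse) mapping: scan the destination; every pixel receives a value.
void inverseMap(const Image &src, Image &dst, const std::function<Point(Point)> &hInv)
{
    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x) {
            Point p = hInv({double(x), double(y)});       // x = h^-1(x')
            int sx = int(p.x + 0.5), sy = int(p.y + 0.5);
            dst.set(x, y, src.inside(sx, sy) ? src.get(sx, sy) : 0);
        }
}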
Another procedure to map an image is the Cross Segment Algorithm (CSA). This algorithm pays more attention to the relationship between the reference and the desired images [14]. Because this method eliminates the searching cost, CSA became an accelerated method of inverse warping. However, sometimes it is not possible to know the generalised disparity of a pixel in inverse warping, and then the operation cannot be performed directly.
Several optimised methods have been proposed to avoid this time problem and try to increase the speed [14]. All of them follow one of two ideas: increasing the speed according to the properties of the epipolar line, or acquiring the generalised disparity of the desired image.
For the application developed in this project, reverse mapping has been applied, because it makes the process safe and establishes a value for every pixel of the image while requiring as few machine resources as possible.
3.2 Image warping
3.2.1 Introduction
Image warping is the process that transforms an image into a new, deformed image following a pattern or function. This process is currently used to recover distorted pictures projected onto surfaces, or simply to create morphing effects for films and animations. The warping procedure estimates the new coordinates of each original pixel based on constraints defined beforehand.
Image warping has been used during the last decades in medical imaging, computer vision and computer graphics [15]. The main efforts have been focused on security, special effects for films and scientific investigations. These deformations have become an interesting area because of the appearance of new computers capable of calculating tedious and difficult operations in real time. The rapid increase in powerful machines, embedded devices and real-time communications has made the treatment of images, their processing and of course their recognition feasible.
There is a wide variety of algorithms that produce warped images. All of them are based on geometrical relations between the pixels and the deformation function to be followed. Depending on the objective to be reached, some are more recommendable than others. The critical features of image warping algorithms are speed, accuracy and CPU resources (the algorithm's complexity). Because there are different geometric transformation techniques for digital images, a study of some of them has been carried out for the development of the application, in order to decide which one fits the project requirements best.
For this project, because the number of images to be processed is quite high, the processing is in real time and the image size may be adjusted, the best choice is the fastest algorithm, provided that the output image is faithful enough to be processed further later.
3.2.2 Basic transformations
1. Scale by a factor s:
x′ = s · x
y′ = s · y
2. Rotate by an angle θ:
x′ = x · cos(θ) − y · sin(θ)
y′ = x · sin(θ) + y · cos(θ)
Each of these is based on a different function that gives the destination coordinates (x′, y′) for every location (x, y) in the source. The method can also be applied the other way round if the linear mapping is invertible.
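As a small illustration, the two basic transformations and their inverses (the form actually needed when the destination image is filled by reverse mapping) can be written as:

#include <cmath>

struct Point { double x, y; };

// Forward transforms from the text: scale by s, rotate by theta (radians).
Point scalePoint (Point p, double s)      { return { s * p.x, s * p.y }; }
Point rotatePoint(Point p, double theta)
{
    return { p.x * std::cos(theta) - p.y * std::sin(theta),
             p.x * std::sin(theta) + p.y * std::cos(theta) };
}

// Inverse transforms: scaling inverts to 1/s, rotation inverts to an angle of -theta.
Point unscalePoint (Point p, double s)      { return scalePoint(p, 1.0 / s); }
Point unrotatePoint(Point p, double theta)  { return rotatePoint(p, -theta); }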
These are the basic functions, but there are infinitely many equations that can be applied to create almost any kind of deformation in the destination picture from the values of the original pixels. Some examples are displayed in Figure 3.5.
Figure 3.5: Warping examples over the same image: Shear, affine, perspective and
waves.
3.2.3 Current algorithms for image warping
3.2.3.1 Transformation with one pair of lines
If there exists a pair of lines, the first referred to the source image and the second to the destination image, any pixel in the source image can be transformed based on a mapping relation between both lines (see Figure 3.6). This algorithm has been taken from [15], [16].
If X′ and X are corresponding coordinates in the source picture and the destination picture respectively, and P′Q′ and PQ are the lines defined by their end points in the source and destination pictures respectively, the following equations can be defined:

u = \frac{(X - P) \cdot (Q - P)}{\lVert Q - P \rVert^{2}} \qquad (3.4)

v = \frac{(X - P) \cdot \mathrm{Perpendicular}(Q - P)}{\lVert Q - P \rVert} \qquad (3.5)

X' = P' + u \,(Q' - P') + \frac{v \,\mathrm{Perpendicular}(Q' - P')}{\lVert Q' - P' \rVert} \qquad (3.6)
where, in (3.5) and (3.6), Perpendicular() returns the vector perpendicular to, and of the same length as, its argument. The value u corresponds to the position along the line PQ, while v is the signed perpendicular distance from the pixel to that line. The value of u lies between 0 and 1 as the pixel moves from P to Q, and exceeds that range outside the segment.
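A minimal sketch of equations (3.4)-(3.6), following the convention used above (X is the destination pixel, X′ the source location to sample), could look like this; it is an illustration of the formulas rather than the application's code:

#include <cmath>

struct Vec { double x, y; };

static Vec    sub (Vec a, Vec b)    { return { a.x - b.x, a.y - b.y }; }
static Vec    add (Vec a, Vec b)    { return { a.x + b.x, a.y + b.y }; }
static Vec    mul (Vec a, double k) { return { a.x * k, a.y * k }; }
static double dot (Vec a, Vec b)    { return a.x * b.x + a.y * b.y; }
static double len (Vec a)           { return std::sqrt(dot(a, a)); }
static Vec    perp(Vec a)           { return { -a.y, a.x }; }   // 90-degree rotation

// Given a destination pixel X, the destination line PQ and the source line P'Q',
// return the source location X' to sample from (equations 3.4 - 3.6).
Vec warpSingleLine(Vec X, Vec P, Vec Q, Vec Pp, Vec Qp)
{
    const Vec    PQ   = sub(Q, P);
    const Vec    PpQp = sub(Qp, Pp);
    const double u = dot(sub(X, P), PQ) / dot(PQ, PQ);      // position along the line
    const double v = dot(sub(X, P), perp(PQ)) / len(PQ);    // signed distance from the line

    // X' = P' + u (Q' - P') + v Perpendicular(Q' - P') / |Q' - P'|
    return add(add(Pp, mul(PpQp, u)), mul(perp(PpQp), v / len(PpQp)));
}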
This option applies to each pixel of the whole image a coordinate transformation made of rotation, translation and/or scaling. The scaling is done only in the direction of the line, while the rotation and the translation depend on the coordinates of the pixel. The procedure followed by this algorithm (Algorithm 3.1) is the following:
Figure 3.6: Single pair of lines method. Description of the variables involved.
Because this process is done using reverse mapping, it is ensured that every pixel in the destination image is defined, one by one, by a pixel of the source image. Some examples of this algorithm are displayed in Figure 3.7.
Figure 3.7: Single pair of lines method. Operations applied to each pixel. Original, displacement, rotation, scale.
Since a face contains more than a single line controlling all the relevant facial features (outlines, chin borders, eye lines, etc.), it is necessary to extend this method to multiple lines. This improvement is described in the following section.
3.2.3.2 Transformation with multiple pairs of lines
The big innovation of this method is the management of multiple lines during the morphing process. In this case, the interaction of each pixel with all the lines must be considered. As [16], [17] explain, the closer the pixel is to a line, the bigger the interaction between them. To handle this, a weighting of the coordinate position computed for each line is applied, following Equation 3.7.
weight = \left( \frac{length^{p}}{a + dist} \right)^{b} \qquad (3.7)

where length is the length of each control line, dist is the distance between the pixel and that line, and a, b and p are constants used to tune the behaviour. The values of these constants have been studied and bounded in Table 3.2.3.2.
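The weighting function itself is a one-liner; the constants a, b and p below are illustrative defaults only, not the values reported in the table:

#include <cmath>

// Weight of a single control line for one pixel (equation 3.7).
double lineWeight(double length, double dist, double a = 1.0, double b = 2.0, double p = 0.5)
{
    return std::pow(std::pow(length, p) / (a + dist), b);
}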
A graphical representation of the method helps in understanding the algorithm. In Figure 3.8, the illustration on the left represents the coordinates in the destination image, while the one on the right represents the source.
X′ is the point we sample from the source picture and X is the destination position. This new point is computed as the weighted average of the two pixel locations X′₁ and X′₂. The behaviour of this method is that points on a line, or very close to it, move in the same direction as that line, while pixels further away from any specific line are affected by all of them. Some examples of this algorithm are illustrated in Figure 3.9.
Figure 3.8: Multiple pairs of lines method. Description of the variables involved.
Figure 3.9: Multiple pairs of lines method. Examples of the resulting deformations.
The principle of the next method is to create deformed pictures based on a triangular mesh, using piecewise polynomial transformations. Supposing that the original and final images each have an attached mesh that defines the location of the most significant features, and that both meshes are formed by simple triangles, the triangles of the final mesh can be regarded as a change of coordinates of the corresponding original triangles.
The method relies on the observation that, by calculating the barycentric coordinates of each pixel with respect to a triangle of the final mesh, it is possible to obtain the equivalent location of the pixel in the original triangle. The position and colour of all the pixels included inside one triangle can then be warped to the final shape. So, with this method it is possible to find, for each triangle in the warped mesh, its equivalent in the original one and copy its intensity values to the warped image through a triangle-to-triangle correspondence [18].
Considering a triangle T defined by its three vertices A, B and C, any point P situated inside the triangle can be written as:
P = α · A + β · B + γ · C \qquad (3.8)

where α, β and γ are the barycentric coordinates, which also fulfil α + β + γ = 1.

To obtain the barycentric coordinates of a point P(x, y) in a given triangle with vertices A(x, y), B(x, y) and C(x, y):

P_x = α · A_x + β · B_x + γ · C_x
P_y = α · A_y + β · B_y + γ · C_y

and, substituting γ = 1 − α − β,

P_x = α · A_x + β · B_x + (1 − α − β) · C_x
P_y = α · A_y + β · B_y + (1 − α − β) · C_y

\begin{pmatrix} P_x - C_x \\ P_y - C_y \end{pmatrix} = \begin{pmatrix} A_x - C_x & B_x - C_x \\ A_y - C_y & B_y - C_y \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \qquad (3.9)

That is,

M = \begin{pmatrix} A_x - C_x & B_x - C_x \\ A_y - C_y & B_y - C_y \end{pmatrix} \;\Longrightarrow\; \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = M^{-1} \begin{pmatrix} P_x - C_x \\ P_y - C_y \end{pmatrix}, \qquad \gamma = 1 - \alpha - \beta \qquad (3.10)
With this information it is possible to deform the pixels included in each of the triangles of the original mesh to the required shape. The computational cost is low because the matrix to be inverted is just a 2 × 2 matrix, which is never singular if the triangle is well defined. This method can be seen as a step between the multiple-pairs-of-lines deformation and the next method, mesh warping.
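A compact sketch of this triangle-to-triangle correspondence, assuming simple point structures rather than the application's own classes, is shown below; it solves the 2 × 2 system of (3.10) and uses the resulting barycentric coordinates to find the equivalent source position:

struct Pt { double x, y; };

// Barycentric coordinates (alpha, beta, gamma) of P in triangle ABC (equations 3.8 - 3.10).
// Returns false when the triangle is degenerate (singular 2x2 matrix).
bool barycentric(Pt P, Pt A, Pt B, Pt C, double &alpha, double &beta, double &gamma)
{
    const double m00 = A.x - C.x, m01 = B.x - C.x;
    const double m10 = A.y - C.y, m11 = B.y - C.y;
    const double det = m00 * m11 - m01 * m10;
    if (det == 0.0) return false;                    // badly defined triangle

    const double px = P.x - C.x, py = P.y - C.y;
    alpha = ( m11 * px - m01 * py) / det;            // apply the inverse of the 2x2 matrix M
    beta  = (-m10 * px + m00 * py) / det;
    gamma = 1.0 - alpha - beta;
    return true;
}

// Map a pixel of the warped (destination) triangle Ad-Bd-Cd back to the original triangle As-Bs-Cs.
Pt mapToSourceTriangle(Pt Pdst, Pt Ad, Pt Bd, Pt Cd, Pt As, Pt Bs, Pt Cs)
{
    double a = 0.0, b = 0.0, c = 0.0;
    barycentric(Pdst, Ad, Bd, Cd, a, b, c);          // same coordinates are valid in both triangles
    return { a * As.x + b * Bs.x + c * Cs.x,
             a * As.y + b * Bs.y + c * Cs.y };
}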
This process was pioneered by Douglas Smythe for use in the film Willow (1988) [15]. The method is based on changes to points of different meshes instead of lines (see [15], [19] for more details). The idea is to break up both images into small sections that are mapped onto each other, deformed, for the morph process. In order to apply this method, two arrays are needed: the first contains the source coordinates of the mesh, while the other encloses the destination mesh. Both matrices must have the same size, so that there is a one-to-one relation for the required computations.
Two passes are needed to create the final warped image, which is why the technique is called a two-pass warp. During the first pass the image is warped only vertically; in the second pass the image is fully transformed with a horizontal warp applied over the pre-warped image. Occasionally, in some parts of the picture it may be preferable to perform the horizontal pass first in order to maintain the picture quality [20].
Figure 3.10: Mesh warping. Operations applied to each pixel. Original mesh (a) and desired mesh (b).
Let the original image be called I_S and the destination image I_D, and let M_S and M_D be the two meshes that contain the corresponding coordinates (see Figure 3.10). It is important to notice that both meshes are constrained to be topologically equivalent [15]; that means that no folding, discontinuities or self-intersections between nodes are allowed in either network. During this process some intermediate images are created. Algorithm 3.3 lists the steps to be followed to implement this method.
There are several methods to implement the interpolation between two collections of coordinates, but the most used one is bi-cubic spline interpolation; more precisely, the usual algorithm is Catmull-Rom spline interpolation. A deeper explanation is attached at the end of the text for further analysis (see Appendix A).
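For completeness, the one-dimensional Catmull-Rom interpolation between two control points can be sketched as follows (the full code used in the project is the one referenced in Appendix A):

// Catmull-Rom spline interpolation between p1 and p2, with p0 and p3 as the
// neighbouring control points and t in [0, 1].
double catmullRom(double p0, double p1, double p2, double p3, double t)
{
    const double t2 = t * t, t3 = t2 * t;
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2 +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
}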
An example of the deformation created using this method is displayed in Figure 3.11, based on the example of [21]. In this case the deformation has been created using a cylindrical pattern as the destination mesh.
Figure 3.11: Mesh warping. Example using a cylindrical destination mesh. Brooklyn Bridge (New York City, US), desired mesh with a cylindrical deformation applied, and resulting picture once the warping is executed.
To obtain a smooth and continuous batch of pictures in the transition between both images, it is recommended to apply a cross-dissolve filter⁴. This provides a natural behaviour along the path between the original image and the final result.
⁴ That is, a simple linear transition between the images.
After the previous review of the different deformation algorithms, two of them were implemented for this project: transformation with multiple pairs of lines and triangular warping. The results given by both methods are described in Chapter 4, the second method being more stable and robust than the first one for the aims of this study. The reason lies mainly in the theoretical concept of each method: multiple pairs of lines obviously focuses on lines that define boundaries in the face, while triangular mesh warping is based on points that define a deformed space.
Because the target here is to make the process work automatically, and current pattern recognition methods estimate positions rather than contours, it is easier to create a mesh from these automatically detected positions than to define the regions by lines (the outlines of a face are normally not straight lines but curves). On the other hand, mesh warping could be considered the "next step" after the triangular mesh method. It requires a large number of nodes (many more than the triangular mesh method) in order to create a smooth surface defining small variations between points. It offers high continuity between the small sections defined by the nodes but, because of the level of complexity it demands and the computational load it consumes, triangular mesh warping comes out as the best choice.
Image analysis for object detection is concerned with the extraction of measurements, data or information from an image by automatic or semi-automatic methods. By definition, the systems to be analysed are not limited to the classification of scene regions into a fixed number of categories; they are rather designed to provide a description of complex scenes whose variety may be enormously large and ill-defined in terms of a priori expectations [22].
The objective in this study is to find, as quickly and accurately as possible, facial patterns defined beforehand: both eyes, the nose and finally the mouth of each face. It is important to take into account that the application developed starts from the assumption that every input image contains a face, but it could easily be changed to any other static object, turning it into a portable application for future projects.
The final use of the object detection is face deformation. Therefore, the deformation depends on the number of objects detected inside the face, improving as this quantity increases. One of the requirements of this project is pseudo real-time analysis; that means the results must be produced just in time, ready to be used afterwards by other systems. That is the reason why the selected method was Rapid Object Detection using a Boosted Cascade of Simple Features, described in the next section.
Because human faces are complex objects, it is difficult to find features or heuristics that can cope with the huge variety of instances of the object class (e.g. faces, eyes, mouths) that may be rotated in any direction, captured under different lighting conditions, or altered by the simple appearance of glasses, beards or wrinkles. For such objects, statistical models (here called classifiers) can be trained and used to detect the desired targets. To do that, the statistical models are taught using multiple instances of the object to be recognised (called positive instances) and also multiple samples where the object does not appear (negative instances). The collection of all these samples, positive and negative, forms a training set. During the training process, features are extracted from the training set, and the distinctive features of each image are used to classify the object. It is worth remarking that, with this method, if the cascade does not detect an existing object it is possible to add the sample to the classifier training set and correct the behaviour for the next time.
The statistical approach used in this project is provided by the OpenCV libraries (mentioned and explained in the following sections) and is based directly on the Viola & Jones publication [23]. This approach applies simple Haar-like features and a cascade of boosted tree classifiers as the statistical model. The classifier must be trained on images of the same size, and detection is done by sliding a window of that size across the whole picture. At each step, the algorithm checks whether the region looks like the desired object or not. Furthermore, to cover the possible sizes of the objects to be detected, the classifier is able to scale the patterns. For this method to work, all that is needed is a set of Haar-like features and a large set of very simple classifiers that classify each image region as the desired object or not. Each feature is determined by its shape, its position relative to the origin of the search window, and the scale factor applied to it. In total, the 14 templates shown in Figure 3.12 were used.
Figure 3.12: Set of Haar-like templates used for object detection. Edge features, line
features and centre-surround features.
As shown in the previous figure, each feature is composed of two or three black or white rectangles, either horizontal, vertical or rotated by 45°. To compute the value of a Haar feature, only a weighted sum of two components is needed: the pixel sum over the whole area of the feature and the sum over the black rectangle. The weights of the two components have opposite signs, and they are normalised by dividing the sum over the black rectangle by the total area of the feature.
Instead of computing the pixel sums over multiple rectangles directly, which would make the detection process incredibly slow, [23] defined a way to make the summations faster: the integral image.
ii(x, y) = \sum_{x' < x,\; y' < y} i(x', y') \qquad (3.13)
where ii(x, y) is the integral image and i(x, y) is the original image. The sum of the pixels over a single window r = {(x, y) : x₀ ≤ x ≤ x₀ + w, y₀ ≤ y ≤ y₀ + h} can then be computed from the surrounding areas, as shown in Figure 3.15,
where the sum of the pixels of rectangle D can be computed with four array references, as demonstrated in [23]. The integral image at point 1 corresponds to the sum of all the pixels included in rectangle A. Similarly, the value at point 2 is A + B, the value at point 3 is A + C, and the value at point 4 is A + B + C + D. Therefore, the sum within D can be computed as 4 + 1 − (2 + 3). That means that the pixel sum over a rectangle can be computed regardless of its size, using only the corners of the rectangle:
RecSum(r) = ii(x₀ + w, y₀ + h) + ii(x₀, y₀) − ii(x₀ + w, y₀) − ii(x₀, y₀ + h)
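A straightforward sketch of both ideas, using plain nested vectors instead of the application's image classes, is given below:

#include <vector>

// Integral image: ii(x, y) holds the sum of all pixels above and to the left of (x, y)
// (equation 3.13). A one-pixel border of zeros simplifies the recurrence.
// Assumes a non-empty, rectangular input image.
std::vector<std::vector<long> > integralImage(const std::vector<std::vector<unsigned char> > &img)
{
    const int h = static_cast<int>(img.size());
    const int w = static_cast<int>(img[0].size());
    std::vector<std::vector<long> > ii(h + 1, std::vector<long>(w + 1, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x];
    return ii;
}

// Sum of the pixels inside the w-by-h rectangle whose top-left corner is (x0, y0),
// computed with four array references: 4 + 1 - (2 + 3).
long rectSum(const std::vector<std::vector<long> > &ii, int x0, int y0, int w, int h)
{
    return ii[y0 + h][x0 + w] + ii[y0][x0] - ii[y0][x0 + w] - ii[y0 + h][x0];
}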
A decision tree classifier can now be built from the feature values computed over each area of the image, as in (3.14) for two terminal nodes or (3.15) for three, where each f_i returns +1 when the feature value lies within a pre-defined threshold and −1 otherwise:

f_i = \begin{cases} +1, & x_i > t_i \\ -1, & x_i \le t_i \end{cases} \qquad (3.14)
f_i = \begin{cases} +1, & t_{i,0} \le x_i < t_{i,1} \\ -1, & \text{otherwise} \end{cases} \qquad (3.15)
It is important to notice that a single classifier of this kind cannot detect an object by itself; it only detects a simple feature in the image (such as a border or a point). For instance, the eye region is often darker than the cheeks, or the iris is darker than the rest of the eye (assuming a correct size and orientation of the feature, as in Figure 3.16).
After the creation of these classifiers (called weak classifiers), a set of complex, robust classifiers is built iteratively from the union of the weak classifiers, each one a weighted sum of weak classifiers and increasingly more complex. Afterwards, a cascade is created in which the first positions are occupied by the simple classifiers and the final positions by the most complex ones. As the window is scanned, each classifier can either reject it or send it to the next stage F_i, as explained in Figure 3.17.
Because Haar cascades are based on statistical analysis, this method provides fast and reliable results, and it is not necessary to carry out the learning process from scratch, because publicly available trained cascades exist for the features needed (explained later in Chapter 5). These advantages make Haar cascades the best option to implement the complete location program.
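As an orientation, face detection with a pre-trained cascade reduces to a few calls in OpenCV. The sketch below uses the current C++ interface (cv::CascadeClassifier); the 2009-era library used in the project exposes the same functionality through the older C API, and the cascade file name is just one of the examples distributed with OpenCV:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

// Detect faces in a portrait image with a pre-trained Haar cascade.
// Eyes, nose and mouth cascades are loaded and used in exactly the same way.
std::vector<cv::Rect> detectFaces(const cv::Mat &image)
{
    cv::CascadeClassifier cascade;
    cascade.load("haarcascade_frontalface_alt.xml");   // cascade shipped with OpenCV

    cv::Mat grey;
    cv::cvtColor(image, grey, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(grey, grey);                       // reduce the influence of lighting

    std::vector<cv::Rect> faces;
    // Slide a detection window over the image at several scales (scale step 1.1, 3 neighbours).
    cascade.detectMultiScale(grey, faces, 1.1, 3, 0, cv::Size(30, 30));
    return faces;
}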
Figure 3.17: Haar cascade. Each classifier can reject the image or pass it on to the next classifier.
The face database used for this project contains images of people in different conditions and situations, but all of them have in common that they are portraits. That means that face location is not a big issue, because the input images are already faces, removing the need to look for faces inside full-body pictures. Despite that, the application is able to detect faces inside images and afterwards detect the internal features (eyes, mouth and nose), making the solution broader and more complete.
Most of the techniques used for facial recognition assume the availability of ideal images (good light position and intensity, uniform background, smooth variations in the object's shape, sufficient resolution, no compression, etc.). However, real scenarios are completely different: poorly defined shapes, diffuse or moving objects and complex backgrounds force the use of previous processing (pre-processing) to obtain a better image from which face regions can be extracted from the environment.
This is probably one of the most exciting and difficult tasks in face recognition: to extract face coordinates with a low probability of failure. Until the mid-1990s faces were searched for against highly controlled backgrounds (a single shade). The methods of that period included the use of neural networks, skin texture, or deformable meshes based on facial characteristics.
Among the current techniques used to detect faces, the one that has received most emphasis is the codification of rules that describe the characteristics of the face and their relations (relative distances, positions, or even areas described by a group of points). This method presents two particular difficulties: it is necessary to know beforehand that there is a face in the scene, and it generates too many false positives on complex backgrounds [24].
Other alternatives have been studied to improve accuracy and quality, such as global techniques based on the localisation of faces, treating them as a pattern-location problem in which the face is the pattern. When templates are used, a manually defined pattern is searched for in the input image; that is, the face and non-face categories are determined by an expert. Appearance-based methods instead learn the templates from training images. In general, these methods use statistical analysis and machine learning to find the relevant features of the face and non-face categories.
The disadvantage of template-based algorithms is the difficulty of generalising them to different image situations (illumination, partial occlusion or position). To fix that, the expert has to define a large number of templates to cover every possible situation and therefore represent every case. Both kinds of method perform an exhaustive search for the pattern over every position and scale, extracting the output region and classifying the result with classical methods.
The last group of methods locates faces using colour distribution techniques. Because faces occupy a narrow colour band inside the colour space, and most of the information is concentrated in the chrominance rather than the luminance, the latter component can be discarded using segmentation models: RGB, HSV, YCrCb, YIQ, CIE XYZ [25].
These methods try to locate faces by detecting facial characteristics such as eyes, eyebrows, mouth, face contour, etc., and then combining them with other procedures that identify the relations between them and verify the existence of the face. These procedures are based on the idea that humans are able to recognise faces in different positions or lighting situations because there must be a group of distinctive properties independent of those variables.
The eigenfaces method is probably the simplest and most efficient method used for human face recognition. Contrary to other algorithms based on the identification of gestures and the classification of distances between them, this method evaluates the image as a whole. Thanks to this, it is feasible to use it not only in prepared environments with pre-defined lighting conditions but also outside, in the real world. Basically, the method relies on a simple concept: reduce useless information as much as possible. When a picture is studied, even a small one, there is a lot of information contained in it; therefore, depending on the final purpose, the focus on some parameters is emphasised over the rest.
When a general image is analysed, it is expected that most of its area will not represent a face. That is why a fast and accurate method is needed to detect and extract human faces from the rest of the background. The way to do that is to create a basis of faces and to try to represent any analysed image as a combination of them. It is easy to compare this technique to the representation of colours: the basis of colours is formed by primary red, green and blue, and any colour can be represented as a partial addition of red, green and blue. For instance, orange is formed as a combination of the maximum value of red, half the value of green and no blue.
In the case of eigenfaces, the problem becomes finding the collection of basis faces that best represents a specific face. Returning to the previous comparison, the issue is how much red, blue and green paint the painter needs to reproduce the colour he is looking at. It is important to notice that a precise selection of the basis faces provides better results in the recognition step. The creation and development of the mentioned face basis is explained in detail in several publications [26], [27], [28], taking into account that some information reduction has to be applied in order to diminish the complexity of the original problem by a factor of up to 1/1000.
Each face we want to introduce into the basis to be classified can be projected into the face space and then analysed as a vector. After that, any existing distance measure, such as the Euclidean distance, k-nearest-neighbour, the Mahalanobis distance or a neural network, can be used to classify the input face. Once this step is done the problem is solved, making the following points straightforward.
• Generation of eigenfaces.
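As an illustration of the projection step, the sketch below uses the PCA class of recent OpenCV releases; it is not the code of the application, and the helper name projectFace is invented for this example:

#include <opencv2/core.hpp>
#include <vector>

// Flatten every training face into one row, compute the eigenface basis with PCA and
// project a probe face into the resulting face space. All faces are assumed to be
// greyscale images of the same size.
cv::Mat projectFace(const std::vector<cv::Mat> &trainingFaces, const cv::Mat &probe,
                    int numComponents, cv::PCA &pca)
{
    const int n = static_cast<int>(trainingFaces.size());
    cv::Mat data(n, static_cast<int>(trainingFaces[0].total()), CV_32F);
    for (int i = 0; i < n; ++i) {
        cv::Mat row;
        trainingFaces[i].reshape(1, 1).convertTo(row, CV_32F);   // flatten to one row
        row.copyTo(data.row(i));
    }

    // The eigenvectors of the covariance matrix of 'data' are the eigenfaces.
    pca = cv::PCA(data, cv::Mat(), cv::PCA::DATA_AS_ROW, numComponents);

    cv::Mat probeRow;
    probe.reshape(1, 1).convertTo(probeRow, CV_32F);
    return pca.project(probeRow);   // coordinates of the probe in face space
}

The projected vector of a probe face can then be compared with the stored projections using, for instance, cv::norm to obtain the Euclidean distance mentioned above.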
Elastic Bunch Graph Matching is based on the idea that human face images have many non-linear characteristics that are not captured by linear analysis methods, such as variations in pose (standing straight versus leaning over), facial expression (smiling versus sad) or illumination (outdoor lighting versus indoor fluorescents).
A Gabor wavelet transform (see [30] for more details) creates a dynamic link architecture that projects the face onto an elastic grid. A Gabor jet is a node of the elastic grid (represented by circles) that describes the image behaviour around a given pixel. It is the result of a convolution of the face image with a Gabor filter, which is used to detect shapes and to extract relevant features by means of image processing. The recognition process relies on the similarity of the Gabor filter responses at each Gabor node. This biologically inspired processing with Gabor filters resembles a procedure executed in the visual cortex of higher mammals. The main disadvantage of the method is that it requires accurate landmark localisation, which can often be achieved by combining PCA and LDA methods.
For the programming of the application in this project, face recognition by global representation was chosen as the best option. This decision was taken first of all because the input images contain faces occupying almost the whole image region, and secondly because the PCA method had already been implemented and tested, making the programming less difficult and more stable and accurate. Furthermore, eigenfaces is the simplest method to implement, it is used as the basis of complex software applications by specialised companies, and it is the one researchers use in their publications.
Chapter 4
Development of the application
For the complete development of this project, the C++ language was chosen. One of the main requirements was to increase speed as much as possible, and C++ is ideal for high-performance applications because it works not only with high-level functions but also with low-level machine functions. C++ is an object-oriented, mid-level language with features such as class management, inheritance and function overloading. Figure 4.1 shows a comparison between C and C++, indicating that C++ is recommended for small and medium applications, while C is better for big applications or high resource requirements [31]. For this project gcc v.4.3.2 i486-linux-gnu has been used.
Another advantage C/C++ offers is portability to other systems. Because the language is multi-platform, it is possible to compile the code in any Unix environment (Linux, BSD, Solaris, Mac, etc.) or Windows environment. For this study, Ubuntu 8.10 - the Intrepid Ibex (released in October 2008) has been used.
To improve usability for the final user, a graphical interface has been designed with Qt4. The Qt4 libraries were originally created by Trolltech¹. In June 2008 Nokia acquired Trolltech to accelerate its cross-platform software strategy for mobile devices and desktop applications, and to develop its Internet services business. In September 2008 Nokia renamed Trolltech to Qt Software. Qt libraries are a very good solution for developing standard interfaces. Qt is a cross-platform application framework: using Qt, it is possible to develop applications and user interfaces once and deploy them across many desktop and embedded operating systems without rewriting the source code. Some of the systems that base part of their code on Qt are consumer electronics (mobile handsets, GPS navigators), manufacturing (part of the Siemens manufacturing assembly line), medical systems (digital radiology), entertainment (film production) and transportation (mobility systems). The original idea of this project was to implement the algorithms on a standard laptop/PC, but it is interesting to mention that re-packing the application and installing it on embedded systems, PDAs or hand-helds would be a feasible option to make the idea portable.
The Qt platform is governed by the GNU General Public License versions 2 and 3, so it can be used freely as long as it is not used for commercial purposes. There are two possible licences for Qt4, open source and commercial; for the targets followed in this project, the open source version has been selected.
¹ More info: http://www.qtsoftware.com
After studying the basic Computer Vision algorithms and the fundamentals of digital image processing, one of the first issues to deal with is the complexity of the functions needed to create a useful application. Because Computer Vision is on every robotics engineer's lips, several publicly distributed libraries have been released, amongst which two stand out. One of the objectives of the project was to investigate which of them are available and to select the most adequate. As mentioned before, the libraries must be written in C++ in order to be included in the main program.
The CImg Library is a free C++ toolkit providing simple classes and functions to load, save, process and display images in C++ code². It consists of a single header file, CImg.h, that must be included in the program source. It contains useful image processing algorithms for loading/saving, resizing/rotating, filtering and object drawing (text, lines, faces, ellipses, ...). Images are handled by a class able to represent images of up to 4 dimensions (from 1-D scalar signals to 3-D volumes of vector-valued pixels), with templated pixel types. One of the big advantages of this library is that it depends on a minimal number of external libraries: it is easy to compile it with only the standard C libraries. There is no need for exotic libraries and complex dependencies.
OpenCV is the most popular free library aimed mainly at real-time Computer Vision³. It includes a complete collection of resources and manuals to interact fully with the library from the main application, enhancing the possibilities and the expectations of the project. This point, and its easy integration with the Qt4 graphical libraries mentioned before, made OpenCV the best choice to implement the application for this project (the full extent of this library exceeds the limits of this project). Some example areas covered by OpenCV are Human-Computer Interaction (HCI), object identification, segmentation and recognition, face recognition, gesture recognition, motion tracking, motion understanding, Structure From Motion (SFM) and mobile robotics.
² More info at: http://cimg.sourceforge.net/
³ More info at: http://sourceforge.net/projects/opencvlibrary/
To make the application easy to use and more standard, the OpenCV library was chosen for the project. There are hundreds of websites with examples and FAQs that help in understanding how it works, its functions and its problems. Therefore, this option made the integration of Computer Vision algorithms and warping algorithms easy.
Because the entire software has been developed in C++, it is straightforward to describe the classes involved in the application and their distribution, although the relationships and connections are in some cases complex to represent. As a reference for the following description, a list of the classes with a brief explanation of the purpose of each one is presented in Table 4.2.1.
Figure 4.2 gives a global view of the life cycle of the application. In short, the upper-level classes are responsible for the user interface (creation of menus, buttons, list views, user dialogues or display alerts), while the bottom-level classes manage the mathematical functions (opening images, reading them, warping them, computing meshes, searching for faces, merging files, ...). A complete schema with the methods and the connections between classes can be found in Appendix C.
Figure 4.2: Schema with the classes and the links between them.
The connection between the interface objects and the internal classes is done through slots and signals. The function that relates slots and signals is called connect, and it is defined as:
bool QObject::connect(const QObject *sender, const char *signal, const QObject *receiver, const char *method, Qt::ConnectionType type = Qt::AutoConnection)
An example extracted from [32] connects a label with a scroll bar. In this case, when the user interacts with the scrollBar object, it emits a signal called valueChanged(int) whose int argument contains the current value of the scrollBar. The label displays whatever is set in the single int argument of its setNum(int) function. Because this function is linked to valueChanged(int), the label directly shows the value of the scrollBar.
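Written out in Qt4 syntax, that example reduces to a single connect call (a minimal, self-contained version is shown here):

#include <QApplication>
#include <QScrollBar>
#include <QLabel>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QScrollBar scrollBar(Qt::Horizontal);
    QLabel     label("0");

    // Whenever the scroll bar emits valueChanged(int), the label's setNum(int) slot
    // is invoked with the same argument, so the label always shows the current value.
    QObject::connect(&scrollBar, SIGNAL(valueChanged(int)),
                     &label,     SLOT(setNum(int)));

    scrollBar.show();
    label.show();
    return app.exec();
}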
With this method it is possible to capture all the events or actions the user performs in the application (clicks, drag & drop, key presses, etc.) and treat them independently.
The usage of the application is very simple. Once the user executes it, a menu appears offering three options in the form of big icons⁴. Depending on the selection, one of three different windows appears.
4.3.1.1 Interface
The first option opens a full-screen window with menu bars, toolbars and a status bar, ready to open a batch of images and let the user decide the positions of the nodes in order to generate a triangular face mesh and afterwards warp the image.
It is possible to customise the location and size of the menus in order to make the environment user-friendly. Furthermore, it is possible to hide the reference head on the right with the help nodes to save space, and also to show/hide the mesh to make the image clearer (for full information see the User's manual in Appendix D). The left toolbar is designed to let the user move the nodes grouped by face feature (eyebrows, eyes, mouth, nose, etc.) or individually. Furthermore, the mesh has been coloured in order to better distinguish the different groups of nodes.
⁴ All the graphical resources of the application have been downloaded from http://www.crystalxp.net (last visited April 10, 2009) and are free to use and distribute under the GPL.
The mesh is made up of 85 nodes distributed over the whole face forming a triangular mesh. It was designed to emphasise some important face features such as the eyes or the mouth⁵ (for full information see Appendix E); that is the reason why there are more nodes around these areas. The mesh can easily be modified to create new results highlighting other points/features. The level of detail of the deformation is directly linked to the number of nodes of the mesh to be warped: as the number of nodes increases, the deformation becomes more accurate and therefore more complex for the machine to perform. The mesh is displayed in Figure 4.5. The results obtained by applying this mesh are reported later in Chapter 4.
⁵ Originally designed by Dr. Rob Jenkins, Psychology Department, University of Glasgow (2009).
Figure 4.4: Screenshot. Highlight of the different sections of the manual form: toolbars, menus and working area.
4.3.2.1 Interface
The second option opens a window with three main parts (see Figure D.6). The top one asks the user to introduce the batch of images to analyse. Afterwards, some options about the output image can be defined, such as the colour space (colour or grey scale) or the final size. Finally, by pressing the Create automatic average... button, the application starts to open each of the images and uses Haar cascades to detect the required facial features. It is important to notice that if any of the five features searched for (face, left eye, right eye, mouth, nose) is not found, the face is automatically rejected. The reason is that the application cannot create a mesh if the position of any of the nodes involved is unknown (the consequences of this decision are discussed later in Chapter 4).
Contrary to the previous mesh, the automatic mesh was designed to be less complex. There were two different versions of this mesh: the first one was developed with 34 nodes and the second one with just 12. After some experiments it was decided that the results did not vary much between the two meshes, because points such as the eyebrows or the chin were poorly estimated in the 34-node mesh. As a consequence, the 12-node mesh has been used for the experiments. Both meshes can be seen in Figure 4.7.
Figure 4.5: Triangular mesh designed for the creation of a manual warped image of a face, enhancing key features such as the eyes and the mouth.
Figure 4.7: Triangular meshes designed for the automatic average face generator. Left: 34-node mesh. Right: 12-node mesh.
One of the main features of the project is the warp algorithm implemented. After the study of the most popular ways to deform images in Chapter 3, two of them were implemented and then compared to decide which was the optimal choice.
The first algorithm was implemented following the steps described in [16], also explained previously. As discussed in [17], the main advantage of this method is that it gives the user a high level of control over the process, but it is necessary to select corresponding feature lines, which makes it more oriented to morphing than to warping. Everything that is specified is moved exactly as the animator wants it to move, and everything else is blended smoothly based on those positions. There are two main problems to highlight [17]:
• Speed All line segments have to be referenced for every pixel. The number of operations is proportional to the number of lines times the number of pixels, so the procedure consumes a lot of computing resources.
• Quality The result is not always as expected. Ghosts [15] or other undesirable artefacts can appear.
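For reference, a minimal sketch of the per-pixel mapping used by this kind of feature-based (line-pair) warping, in the spirit of [16], is given below; the weighting constants a, b and p and the data layout are assumptions, not the project's exact code. It also makes the speed problem visible: every line pair is visited for every destination pixel.

// Minimal sketch of feature-based (line-pair) warping as described in [16].
// Textures and bilinear sampling are omitted; only the geometric mapping is shown.
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };
static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static Vec2 add(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
static Vec2 mul(Vec2 a, double s) { return {a.x * s, a.y * s}; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static Vec2 perp(Vec2 a) { return {-a.y, a.x}; }
static double len(Vec2 a) { return std::sqrt(dot(a, a)); }

struct LinePair { Vec2 P, Q;     // feature line in the destination image
                  Vec2 Ps, Qs; } ; // corresponding line in the source image

// For a destination pixel X, return the source position it should be sampled from.
Vec2 mapPixel(Vec2 X, const std::vector<LinePair>& lines,
              double a = 0.5, double b = 1.0, double p = 0.5)
{
    Vec2 sumDisp = {0.0, 0.0};
    double sumW = 0.0;
    for (const LinePair& L : lines) {            // every line, for every pixel
        Vec2 PQ   = sub(L.Q, L.P);
        Vec2 PsQs = sub(L.Qs, L.Ps);
        double u = dot(sub(X, L.P), PQ) / dot(PQ, PQ);
        double v = dot(sub(X, L.P), perp(PQ)) / len(PQ);
        // Corresponding point relative to the source line
        Vec2 Xs = add(add(L.Ps, mul(PsQs, u)), mul(perp(PsQs), v / len(PsQs)));
        // Distance from X to the segment, used to weight this line's influence
        double dist;
        if (u < 0.0)      dist = len(sub(X, L.P));
        else if (u > 1.0) dist = len(sub(X, L.Q));
        else              dist = std::fabs(v);
        double w = std::pow(std::pow(len(PQ), p) / (a + dist), b);
        sumDisp = add(sumDisp, mul(sub(Xs, X), w));
        sumW += w;
    }
    return add(X, mul(sumDisp, 1.0 / sumW));     // weighted average of displacements
}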
The results of the implementation of this algorithm are presented in Figure 4.8. It is easy to notice that the deformations are inaccurate in some cases and the textures are poorly warped as a consequence of the previous problems (the deformations are not completely smooth because the definition of the lines is complex).
Figure 4.8: Results after the implementation of the transformation with multiple pairs of lines algorithm.
After the implementation of the previous method a different solution was needed, and that is why this second algorithm was written. Its computational cost is close to that of mesh warping and its complexity is lower: it is not necessary to use any mesh interpolation such as the Catmull-Rom algorithm (full code in Appendix A), which saves a large computational cost.
The steps to follow are not complex. Taking advantage of the properties of barycentric coordinates in triangles, the algorithm visits every pixel of the destination image, finds the triangle of the mesh that contains it, and maps it back to the corresponding triangle of the original image. The interpolation implemented to determine the value of the pixel in the original image is based on the four areas created when the mapped x or y does not match an exact pixel position. A simple weighting between these areas is used instead of nearest-neighbour interpolation: the four surrounding pixels share out their values proportionally, resulting in

$P_{orig} = P_{i,j}\,(X_{right} \cdot Y_{down}) + P_{i+1,j}\,(X_{left} \cdot Y_{down}) + P_{i,j+1}\,(X_{right} \cdot Y_{up}) + P_{i+1,j+1}\,(X_{left} \cdot Y_{up})$   (4.1)

Figure 4.9 shows the names of the variables involved to facilitate the understanding of the method applied.
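The following is a minimal sketch, under the assumptions above, of how a destination pixel can be mapped back to the original image through the barycentric coordinates of its triangle; the helper names are hypothetical, and the bilinear weighting of Equation 4.1 would then be applied at the returned position.

// Minimal sketch of the inverse mapping used by triangular mesh warping.
struct Point { double x, y; };

static double triArea(Point a, Point b, Point c)
{
    // Twice the signed area; the 1/2 factor cancels in the ratios below.
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
}

// Returns true if P lies inside destination triangle (A,B,C) and fills (sx, sy)
// with the corresponding position inside the source triangle (As,Bs,Cs).
bool mapThroughTriangle(Point P, Point A, Point B, Point C,
                        Point As, Point Bs, Point Cs,
                        double &sx, double &sy)
{
    double full = triArea(A, B, C);
    if (full == 0.0) return false;            // degenerate triangle
    double alpha = triArea(P, B, C) / full;   // weight of vertex A
    double beta  = triArea(A, P, C) / full;   // weight of vertex B
    double gamma = triArea(A, B, P) / full;   // weight of vertex C
    if (alpha < 0.0 || beta < 0.0 || gamma < 0.0)
        return false;                         // P is outside the triangle
    // Barycentric coordinates are preserved by the affine map between triangles,
    // so the same weights locate the pixel in the original image.
    sx = alpha * As.x + beta * Bs.x + gamma * Cs.x;
    sy = alpha * As.y + beta * Bs.y + gamma * Cs.y;
    return true;
}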
Some examples of the results given by this algorithm are shown in Figure 4.10, where the deformations are smoother and cleaner. The quality of the morphing for this method depends directly on the level of detail and the size of the triangles: as the number of triangles is increased, the deformation of each triangle becomes smaller and therefore the total transformation is less ”sharp”. Another advantage of this method is that the mesh is straightforward to deal with (it is much easier to imagine a mesh than a set of lines representing the outlines of the face), allowing the user to change it comfortably and intuitively just by modifying one file with the coordinates. Therefore, for the final version of the project, the triangular mesh warping method was selected.
Figure 4.10: Results after the implementation of the triangular mesh warping algorithm. George Bush, Madonna and Tony Blair.
Exactly as discussed previously, the pattern recognition was performed using the OpenCV libraries adapted to the project environment. The classes in charge of this task are GraphWidget (manual mode), Detector (Haar cascade management) and StoneFaceWidget (automatic mode). To do that, it is essential to build a cascade file from a collection of positive and negative samples of frontal faces, left eyes, right eyes, mouths and noses. Because it takes a lot of time to prepare these files, they have been downloaded from a specialised open-source website 6 where the files are already prepared and checked to be the best updated approach [34]. A list of the files included in the project can be checked in Table 4.5.1.
The complete code written to deal with Haar Cascades (extracted partially from [35], [36]) is too extensive to be copied directly here, but the main points to understand are:
4. Create the cascade, taking into account the feature to be recognised (mouth, nose, eyes, ...), using the expression cascade = (CvHaarClassifierCascade*)cvLoad(cascadePathName, 0, 0, 0);
6. Create a new rectangle to pick up the coordinates using the expression CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); for each region found in the image.
7. Extract the image embedded in this rectangle using the previous data.
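A compact sketch joining these points with the OpenCV 1.x C API is shown below; the image path, cascade path and output file name are illustrative, and the error handling is reduced to a minimum.

/* Minimal sketch: load a Haar cascade, detect the regions in an image,
 * read the bounding rectangle of each hit and extract that sub-image. */
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main(void)
{
    IplImage *img = cvLoadImage("face.jpg", CV_LOAD_IMAGE_COLOR);
    CvHaarClassifierCascade *cascade =
        (CvHaarClassifierCascade*)cvLoad("./cascades/haarcascade_frontalface_alt2.xml", 0, 0, 0);
    CvMemStorage *storage = cvCreateMemStorage(0);
    if (!img || !cascade) { fprintf(stderr, "could not load image or cascade\n"); return 1; }

    /* Run the detector; every hit is stored as a CvRect in the sequence. */
    CvSeq *faces = cvHaarDetectObjects(img, cascade, storage,
                                       1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40));
    int i;
    for (i = 0; i < (faces ? faces->total : 0); i++) {
        /* Pick up the coordinates of the i-th region found. */
        CvRect *r = (CvRect*)cvGetSeqElem(faces, i);
        /* Extract the image embedded in this rectangle. */
        cvSetImageROI(img, *r);
        IplImage *crop = cvCreateImage(cvSize(r->width, r->height), img->depth, img->nChannels);
        cvCopy(img, crop, NULL);
        cvResetImageROI(img);
        cvSaveImage("crop.jpg", crop);   /* hypothetical output path, overwritten per hit */
        cvReleaseImage(&crop);
    }
    cvReleaseMemStorage(&storage);
    cvReleaseImage(&img);
    return 0;
}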
Some examples of the Haar Cascade results are displayed in the next figures. In Figure 4.11 a clear frontal face is detected, including all the features: eyes, nose and mouth. In Figure 4.12 another frontal face is detected; this time the subject wears glasses, but both eyes have been found. In Figure 4.13 an example of detection of multiple people's faces is displayed.
Figure 4.11: Haar Cascade examples. Face location, face features location without
occlusions.
Figure 4.12: Haar Cascade examples. Face location, face features location with
glasses.
The last part of the project consists of the development of a simple face recognition module to demonstrate whether the results are valid or not. The OpenCV libraries include a module dedicated to PCA, which means they are ready for eigenfaces applications. The code used for this part was based on a previous project 8 with several modifications in order to make it compatible with the present application.
Figure 4.13: Haar Cascade examples. Face location within a group of faces in the laboratory.
5. Find the closest distance between the subspace and the projected face (a minimal sketch of this search is given below).
8 Check http://www.codeproject.com/KB/cpp/TrackEye.aspx
Chapter 5
Results and Discussion
In this chapter, some studies and experiments are discussed to understand whether the average of a face increases the accuracy of face recognition software. Furthermore, several comparisons have been run to determine whether an automatic average face can reach the quality and recognition rate of a manual average face.
5.1 Database
To apply the algorithms implemented in the previous chapters, a batch of 500 male celebrity images has been selected. All of them were taken under several lighting conditions, at different periods of the subjects' lives and with varying facial gestures. All the images are 190x190-pixel grey-scale JPEG files and contain busts. They have been extracted from public websites and were provided to this project by Dr. Rob Jenkins 1 . Some of these samples are displayed in Figure 5.1.
Several experiments have been done to demonstrate that the average face of a person is better than any random single face by itself. The first experiments compare the recognition rates of manual average faces versus automatic average faces to understand the level of similarity between them. Afterwards, some experiments compare the recognition rates of automatic average faces against random single faces. The generated faces are attached in Appendix F and some of them are also displayed in Figure 5.2, where it is possible to tell the automatic and manual output images apart and, in conjunction with the following studies, to appreciate the results achieved with this project. Finally, time comparisons and a study of the evolution of average faces are carried out. Before starting to compare the warped images, however, a simple study is done between a warped average face and a non-warped average face, to demonstrate that deforming the images to a frontal position before making the average is one of the key points of this investigation.
1 Psychology Department. University of Glasgow (2009).
Figure 5.1: Example of some single faces introduced in the face database to do the experiments.
Figure 5.2: Average images generated by the application. Left images are manual
average faces and right images are automatic average faces.
For the first experiment, a group of non-warped average faces has been created and a face recognition algorithm has been applied to them (see Figure 5.3 for some samples and Appendix F for the complete list of faces). To make the experiment complete, the number of faces used to generate the non-warped average is increased to understand the behaviour of the algorithm.
Table 5.1: Recognition rates of non-warped average faces as the number of images used to generate the average changes.
Obviously, as the number of faces is increased (Table 5.1), the output generated is fuzzier and less similar to a face, and therefore the recognition rate is smaller. This is the reason why the deformation followed in this project enhances the chances of a face being recognised. These results show that deforming the images before creating the average is essential for correct subsequent recognition.
The first experiment includes 500 faces (20 images x 25 people) in the database and consists of the following steps:
1. Generate an automatic average face for each celebrity with the photos included in the database.
2. Generate a manual average face for each celebrity with the photos included in the database.
3. Run the eigenfaces algorithm for all the average images generated.
The results of the experiment are displayed in Table 5.2 and, surprisingly, show that the accuracy of the automatic average face is greater than that of the manual average. The high accuracy of the results in both cases demonstrates that the average face can be a very good representation for artificial face recognition. On the other hand, the reason why the automatic average seems to be better than the manual one is that some of the automatically generated averages include just two or three images of each person, because the rest have been rejected due to lack of information (shadows that hide the eyes, moustaches, etcetera), whereas in the case of manual faces all of the generated averages have been created from 20 images of each celebrity.
Table 5.2: Manual average face and automatic average face recognition rates for the first experiment.
The second experiment includes 375 faces (15 images x 25 people) in the database and consists of the following steps:
1. Generate an automatic average face for each celebrity with the remaining 5 photos not included in the database.
2. Generate a manual average face for each celebrity with the remaining 5 photos not included in the database.
3. Run the eigenfaces algorithm for all the average images generated.
Table 5.3: Manual average face and automatic average face recognition rates
Once again, the automatic faces generated are closer to the generated database than
the manual faces (Table 5.3). The recognition rate has decreased because the faces to
be recognised are not included in the database and therefore the generated face is not a
combination of database faces.
The third and fourth experiments correspond to a change in the point of view of the problem. If, instead of trying to recognise average faces in a single-face database, the average faces are the ones that make up the database, the results may change. In this case, the average faces create a subspace of images, substantially reducing the number of images inside the database (by a factor of 20). The difference between both experiments is that in one the single images to be recognised are part of the images used to generate the averages, and in the other they are not included.
Table 5.4: Database created with average faces, recognising single images that were included in the averages.
The recognition rate obtained with this method decreases significantly, as shown in Table 5.4.
5.2.2.4 Fourth experiment: Database formed with averages. Single faces not included
4. Try to recognise single images (5) not included in the new database.
The recognition rate obtained with this method decreases drastically, as shown in Table 5.5, because of the results of the previous section and because the faces to be recognised were not included when creating the averages.
Table 5.5: Database created with average faces, recognising single images not included in the averages.
Another interesting way to compare both kinds of averages (manual and automatic) is to study the correlation between them, that is, to see how many automatic faces are recognised in a database of manual faces and vice versa.
The results in the previous table show that it does not make much difference to create the database one way or the other, because the results are very close. In other words, the level of recognition of one batch against the other is high and therefore both groups of images are close (in terms of the subspace generated by each batch).
In this second part, a different aspect is examined: the difference between searching for a single face or for an average face inside a database. For this section, instead of the PCA method used before to recognise the faces, an external application is used. As in [37], the website www.myheritage.com is used. This website has an online face recognition application, mentioned in Chapter 2, that detects faces inside photos and displays the closest celebrities with a percentage of similarity.
5.2.3.1 First experiment: Study comparing single faces with automatic av-
erages including non-matched faces
The first experiment compares the submission of a random single face and the submission of an automatic average face. For this, all the non-rejected images used to generate the automatic faces have been submitted and searched for in www.myheritage.com. In Figure 5.4 the results are displayed, including all the faces that the website failed to recognise. Examining that figure, there are 5 average faces that were not recognised (George Clooney, Jack Nicholson, Will Smith, Sean Connery and Tom Cruise), while for the rest of the faces the average is more likely to be recognised (12/19) than the average single image (7/19), and in just one submission.
Figure 5.4: Results comparing the recognition rate between a single face and the
automatic average face (1).
Now, if the failed hits are removed, it is possible to compare the probability of a face being recognised as an automatic average or as a random single face.
Figure 5.5: Results comparing the recognition rate between a single face and the
automatic average face (2).
Examining Figure 5.5, it is concluded that even when the probability of a single image reaches zero, the probability of the average face is still 53.68%. That makes the average stronger and more persistent against changes than a single image (as expected).
Now the failed hits are removed to understand how relevant they are to the results. Looking at Figure 5.6, where the failed hits have been removed for both variables, the average probability increases slightly, because the failed hits were more detrimental to the average face than to the individual faces (54.88%).
To demonstrate that the average face can be a better choice for recognising faces, non-frontal faces of each celebrity were introduced into the website to be recognised. The results show that, statistically, 11 of 24 non-frontal single images are not recognised, while for average faces the number decreases to 4.
Figure 5.6: Results comparing the recognition rate between a single face and the automatic average face (3).
The single-face recognition rate obtained with this method decreases drastically, as shown in Table 5.2.3.4, because non-frontal images are extremely hard to recognise from a single-image collection (the lack of information about one side of the face decreases the overall data about the face).
The main advantage of automating the warping algorithm is that it saves a great amount of time. Because positioning the nodes is hard work and extremely slow, the automatic average face becomes more interesting than the manual average face. To understand the amount of time spent in both cases, a simple experiment has been done. Time has been measured in both cases using the cputime() function in C/C++, which returns the current time of the computer when the function is called. The statistical results of this experiment are displayed in Table 5.2.4 for 20 faces.
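For clarity, a minimal sketch of this kind of measurement is shown below; the standard C header <time.h> exposes it through clock(), and generateAverage() is a hypothetical stand-in for the warping and averaging call being timed.

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the warping and averaging routine being timed. */
static void generateAverage(void) { /* ... build one average face ... */ }

int main(void)
{
    clock_t start = clock();
    generateAverage();
    clock_t end = clock();
    /* CLOCKS_PER_SEC converts processor ticks into seconds. */
    printf("Average face generated in %.3f s of CPU time\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}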
In this section, the behaviour of the recognition rate is tested as the number of images used to create the average image increases, as in [39]. In Figure 5.7 the evolution of the average can be seen. As the number of images that compose the average increases, the recognition rate rises from 63.5% to 96.2%, as in Table 5.2.
Figure 5.7: Results comparing the recognition rate between a single face and the
automatic average face (3).
The most important problem of using automatic face averaging is the number of images rejected in the process. As mentioned before, if both eyes, the nose and the mouth are not all recognised in an image, that image is removed from the average. This makes the manual averages contain more information than the automatic faces. For this project, the average number of images used for each face was 7.37 out of 20 (36.9%).
Overall, the final figure summarising the recognition rates obtained during the last experiments is displayed in Figure 5.8. It is evident that the automatic average face is a very good estimator of the complete face of a person.
6.1 Conclusions
In the present work, a new average face generator has been designed combining triangular mesh warping and Haar Cascades. A basic face recognition system has been developed using PCA and eigenfaces to determine the quality of the results obtained. To complete the investigation, an external face recognition tool provided by http://www.myheritage.com has been used in order to contrast the results.
1. Deforming the faces before generating the average face rapidly increases the recognition rate, because the positions of the face features (eyes, mouth and nose) are controlled.
2. The number of images used to create the average radically affects the recognition rate, which increases as new images of the individual are added while creating the average.
3. When a face image is non-frontal, badly lit or of low resolution, the average improves the recognition rate.
4. The time spent generating a manual average face is 265 times the time needed by the solution presented. Because 40 minutes to generate a manual average face is too long, this innovation makes the idea of automatic average faces feasible and useful.
5. The correlation between manual and automatic faces is fairly close; both provide similar results, with the automatic average even being better in several cases.
After the development of the application and the complete solution to the originally posed problem, some future work can be done. Because the application has been developed in C++, all subsequent ideas should be implemented in the same language, avoiding complex parsing functions or new bugs. To improve the application, some new ideas can be added and implemented.
• Increase the number of face features used for the automatic average face. To do that, new Haar Cascades are required, covering eyebrows, chin, etcetera.
• Implement the mesh warping algorithm with the Catmull-Rom interpolation attached in Appendix A to check whether the recognition rate increases.
• Add new features to the graphical interface such as parameter configuration, image format management, a configuration registry to save the user preferences, improved image management based on thumbnails, etcetera.
• Optimise the source code to gain speed in expensive functions such as warping or the Haar-like feature search.
Cat-Mull algorithm in C
/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* catmullRom:
*
* Compute a Catmull-Rom spline passing through the len1 points in arrays
* x1, y1, where y1 = f(x1)
* len2 positions on the spline are to be computed. Their positions are
* given in x2. The spline values are stored in y2.
*/
void
catmullRom(x1, y1, len1, x2, y2, len2)
float *x1, *y1, *x2, *y2;
int len1, len2;
{
int i, j, dir, j1, j2;
double x, dx1, dx2;
double dx, dy, yd1, yd2, p1, p2, p3;
double a0y, a1y, a2y, a3y;
else dir = 1;
} else { /* decreasing */
if(x2[0]>x1[0] || x2[len2-1]<x1[len1-1]) dir=0;
else dir = -1;
}
if(dir == 0) { /* error */
printf("catmullRom: Output x-coord out of range of input\n");
return;
}
Conversion from the HSI colour space to RGB, by 120-degree sectors of the hue $H$:

For $0^{\circ} \le H < 120^{\circ}$:
$$R = \frac{I}{\sqrt{3}}\left(1 + \frac{S\cos H}{\cos(60^{\circ} - H)}\right), \qquad B = \frac{I(1-S)}{\sqrt{3}}, \qquad G = \sqrt{3}\,I - R - B$$

For $120^{\circ} \le H < 240^{\circ}$:
$$G = \frac{I}{\sqrt{3}}\left(1 + \frac{S\cos(H - 120^{\circ})}{\cos(180^{\circ} - H)}\right), \qquad R = \frac{I(1-S)}{\sqrt{3}}, \qquad B = \sqrt{3}\,I - R - G$$

For $240^{\circ} \le H < 360^{\circ}$:
$$B = \frac{I}{\sqrt{3}}\left(1 + \frac{S\cos(H - 240^{\circ})}{\cos(300^{\circ} - H)}\right), \qquad G = \frac{I(1-S)}{\sqrt{3}}, \qquad R = \sqrt{3}\,I - G - B$$
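A minimal C sketch implementing these formulas could be the following; the function name is hypothetical and it assumes H in degrees within [0, 360) and S, I in their usual normalised ranges.

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG2RAD(a) ((a) * M_PI / 180.0)

/* Convert one HSI triple to RGB following the sector formulas above. */
void hsiToRgb(double H, double S, double I, double *R, double *G, double *B)
{
    double c = I / sqrt(3.0);
    if (H < 120.0) {                       /* first sector: 0 <= H < 120 */
        *R = c * (1.0 + S * cos(DEG2RAD(H)) / cos(DEG2RAD(60.0 - H)));
        *B = I * (1.0 - S) / sqrt(3.0);
        *G = sqrt(3.0) * I - *R - *B;
    } else if (H < 240.0) {                /* second sector: 120 <= H < 240 */
        *G = c * (1.0 + S * cos(DEG2RAD(H - 120.0)) / cos(DEG2RAD(180.0 - H)));
        *R = I * (1.0 - S) / sqrt(3.0);
        *B = sqrt(3.0) * I - *R - *G;
    } else {                               /* third sector: 240 <= H < 360 */
        *B = c * (1.0 + S * cos(DEG2RAD(H - 240.0)) / cos(DEG2RAD(300.0 - H)));
        *G = I * (1.0 - S) / sqrt(3.0);
        *R = sqrt(3.0) * I - *G - *B;
    }
}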
Appendix C
Schema of the Application
Appendix D
User’s manual
StoneFace will allow you to create manual and automatic average faces and also to recognise them in a database. To do that, you just need to select what you want to do in the main menu and start working! The main menu is divided into three independent parts: two for face creation and a last one to detect a face, as you can see in the next figure.
1. Manual This method offers you the possibility of placing, node by node, the positions of the triangular mesh. You will be able to modify the images one by one and decide which are the most important features you want to highlight (such as the eyes, nose or mouth). The main disadvantage of this method is the time spent: you will need approximately two minutes to create a warped image (depending on the level of detail and the patience of the person who is producing the images).
This is the last of the three options of the main menu. This form will allow you to recognise a face using eigenfaces and a previously created face database.
This first part of the application has been designed to involve the user in the process as much as possible. For this reason, all the menus and options have been designed to be clear and accurate. As soon as this option is selected in the main menu, a completely new environment is displayed.
This contains all the options that the software offers. It is divided into 6 parts, each one focused on a different area.
• File Open a batch of images, save the current one, close the current image and also close the application.
• View Show the mesh lines inside the picture and display the help face in the right area.
• Operations Used when the user wants to create an average or see how the current mesh has been deformed so far.
• Images Control of the batch of images loaded. Move to the next face or return to the previous one.
This toolbar contains the most frequently used commands, in order to save time navigating through the menus of the previous section. The options are displayed in Table D.2.2.
The purpose of this secondary toolbar is to manipulate the nodes when an image is loaded. To facilitate the management of the nodes, this toolbar has been divided into three different modules:
With the buttons provided, the user can move the selected node up, down, right and left 5 pixels at a time. Pressing the keyboard keys ”up”, ”down”, ”left” or ”right” has the same effect.
Icon Description
With the first four buttons, the user can move the selected group of nodes up, down, right and left 5 pixels at a time. With the next two buttons, the group turns clockwise or anticlockwise. With the last two buttons, the distance between the nodes in the group can be increased or decreased.
Groups are created by colour and are listed in the table below. Pressing the keyboard keys ”a” (left), ”d” (right), ”w” (up) or ”s” (down) has the same effect.
Colour Description
Blue Outline ( jaw, ears, pinna, temples, scalps and crown)
White Chin
Orange Left eyebrow
Turquoise Right eyebrow
Pink Right eye
Yellow Left eye
Red Mouth
Green Nose
Violet Other landmarks like cheeks or forehead
With these four buttons the user can move all the nodes at the same time. This option is very useful when the face is displaced a certain distance inside the image.
The same can be done by pressing the keys ”f” (left), ”h” (right), ”t” (up) and ”g” (down).
This zone corresponds to the area where the user interacts with the application most of the time. Here the face image is plotted on the left and (optionally) a help face is shown on the right to identify the selected nodes. Because this is a graphics area, some extra features are available.
1. If the user launched the software from a terminal, the node positions can be displayed in the terminal by pressing the key ”m”.
All nodes can be moved using the toolbar, the keyboard, or by dragging and dropping them!
This bar shows tips and descriptions for each of the objects inside the window, as well as other messages such as warnings or progress.
Figure D.3: Example of real face processing. The left area contains the image to be warped, while the right one contains the help face with a red node indicating the last node selected.
These messages appear for just two or three seconds to alert the user to a new situation. If an important error happens or a question arises, the software creates a message dialog to interact with the user. These messages are described in the next point.
During the life of the application several messages can appear on the screen, each with a different purpose. The most relevant ones are enumerated in the following list with a short explanation.
Message — Why?
• ”Original mesh file %1 is missing. Cannot read the file.” — When doing an average of a group of warped images, it is necessary to have, in the same path as each image, a file with the coordinates of the nodes of the warped mesh. For each face the name will be originalname.jpg-warped.dat if there is a previous mesh, and pos.dat by default.
• ”Help mesh file %1 is missing! Cannot read the file.” — It is necessary to have, in the same path as the application, a file with the coordinates of the nodes for the help face on the right. The name is posHelpNodes.dat.
• ”Frontal mesh file %1 is missing! Cannot read the file.” — It is necessary to have, in the same path as the application, the file with the coordinates of the nodes for the frontal mesh. The name is pos.dat.
• ”Triangles file %1 is missing! Cannot read the file.” — It is necessary to have, in the same path as the application, the file with the definition of the triangles of the mesh. The name is triangles.dat.
• ”Error saving mesh file: %1 Do you have rights to do that?” — It is strange, but sometimes an error saving the image can occur. Normally this error is due to lack of space on the hard disk, network problems or lack of privileges to write in a certain folder.
• ”Fatal error warping the mesh. It seems that there are some points of the mesh that cannot be reached. Exiting...” — While warping, it is possible to find some nodes outside the image range. Normally that situation happens when the user drops nodes out of the image region. This problem causes a fatal error and exits the program.
• ”Are you sure you want to leave the actual work and exit?” — When the user tries to close the application, a message appears to ensure that this is the desired selection. All the work that was not saved will be lost.
• ”Are you sure you want to remove actual work without saving?” — When the user clicks on the Close menu, an alert asks whether this is the desired option.
• ”Congratulations! You have reached the end of the list of pictures!” — When the user finishes the whole batch of images, the application understands that the user wants to create an average. That is why this option is offered afterwards.
• ”Cannot read file %1:%2.” — If the user tries to open a corrupt image, this alert advises about the causes and the file involved in the problem.
• ”There is a previous mesh for this image, do you want to load it? Mesh found!” — When the user selects an image that was already warped, the application offers to resume the state of the nodes, reallocating them in the positions read from the file. This file will be originalname.jpg-warped.dat.
The second module of the application consists of a simple form where the user is first asked for a batch of images as input. Then some information is displayed about the selected images, and afterwards some output parameters can be selected.
This is the first step. When the button Search for face files... is pressed, an external window appears waiting for a selection of the images to load. The dialog is displayed in the next figure. It is important to notice that it is possible to select more than one image at the same time, that is, it supports multi-selection.
The program offers the option of modifying the standard settings of the generated output image.
• Size By default the images will be generated at 180x180 if the text boxes are left blank; otherwise the introduced values will be taken into account.
• Grey scale It is possible to convert the output image into a grey-scale image. The grey-scale value used is simply the average of R, G and B for each pixel (see the sketch after this list).
• Save path A button that launches a dialog window to select the name of the image that is going to be created.
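A minimal sketch of this grey-scale conversion using the Qt image API (hypothetical helper, not the project's exact code) could be:

#include <QImage>
#include <QColor>

// Convert an image to grey scale: each output pixel is the plain average
// of its R, G and B components, as described above.
QImage toGreyAverage(const QImage &src)
{
    QImage out = src.convertToFormat(QImage::Format_RGB32);
    for (int y = 0; y < out.height(); ++y) {
        for (int x = 0; x < out.width(); ++x) {
            QRgb p = out.pixel(x, y);
            int g = (qRed(p) + qGreen(p) + qBlue(p)) / 3;  // average of R, G, B
            out.setPixel(x, y, qRgb(g, g, g));
        }
    }
    return out;
}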
During the scanning of the images and the automated process, some information about each face is displayed, such as the position of the image in the list, the number of colours and the size of the original image. In figure 6 an example of the real-time process is shown.
This image contains the original image before it is warped; only resizing has been done.
This image contains the final individual image after it has been warped automatically. It is easy to compare them and spot the differences for each face.
D.3.6 Progress
The progress bar represents the number of faces processed over the total number of faces in the list. This object easily displays the real state of the process and of the internal subroutines.
The last module of the application is designed to complete the investigation. Here the user can provide a face database and, given a face, the software will find by itself the closest face inside the database. The database must be provided in a valid eigenfaces list format (an XML file with standard tags).
Introduce the path of the database file. Some information about the eigenfaces stored inside the database is presented, as well as the number of images and the creation date of the database.
Introduce the path of the face to recognise after pressing the button Select face. Some information about the image is presented (number of colours, size, number of bytes, etc.).
Once the image is selected, the corresponding button is pressed (Recognise this face) and the application has found the closest face, some information about the individual is displayed (age, profession, real name or a link to find more information) 1 .
This image contains the resulting image, which matches the closest person in the database.
1 All this information has been extracted from www.wikipedia.org.
Appendix E
Manual face mesh
REGION1 Outline
Number Description X Y
1 Chin centre 186 522
2 Chin left 148 505
3 Chin right 219 506
4 Jaw left 85 472
5 Jaw right 287 472
6 Ear left 42 369
7 Ear right 330 366
8 Pinna left 37 301
9 Pinna right 337 310
10 Temple left 41 258
11 Temple right 333 257
12 Scalp left 51 100
13 Scalp right 309 97
14 Crown 190 59
REGION2 Mouth
Number Description X Y
15 BlipB C 185 438
16 BlipB L 156 434
17 BlipB R 214 435
18 BlipT C 186 422
19 BlipT L 152 419
20 BlipT R 215 419
21 TlipB C 187 415
22 TlipB L 169 413
23 TlipB R 203 413
24 TlipT C 187 406
25 TlipT L 169 404
26 TlipT R 203 404
27 Lips left 136 417
28 Lips right 241 419
REGION3 Nose
Number Description X Y
29 Septum 187 369
30 Left nostril 164 360
31 Right nostril 210 363
32 Left flange 145 353
33 Right flange 228 353
34 Tip 186 345
35 Left valley 160 334
36 Right valley 214 332
37 Bridge 188 271
REGION4 Eyes
Number Description X Y
38 Lpupil 118 268
39 LirisT O 105 266
40 LirisT I 133 265
41 LlidL O 105 280
42 LlidL I 133 279
43 LlidT O 94 261
44 LlipT I 118 254
45 Louter fold 78 275
46 Ltear duct 153 276
47 Lbag outer 118 296
48 Lbag inner 148 285
49 Rpupil 260 268
50 RirisT O 276 265
51 RirisT I 245 265
52 RlidL O 276 280
53 RlidL I 245 278
54 RlidT O 286 262
55 RlipT I 260 254
56 Router fold 300 276
57 Rtear duct 227 278
58 Rbag outer 261 297
59 Rbag inner 234 289
REGION5 Eyebrows
Number Description X Y
60 Lbrow inner 157 247
61 Lbrow TI 148 237
62 Lbrow TM 118 229
63 Lbrow LI 145 249
64 Lbrow LM 118 239
65 Lbrow outer 74 249
66 Rbrow inner 222 246
67 Rbrow TI 229 235
68 Rbrow TM 261 228
69 Rbrow LI 235 247
70 Rbrow LM 260 238
71 Rbrow outer 309 248
REGION6 Landmarks
Number Description X Y
72 Chin tip 186 487
73 Chin top 186 462
74 Chin left 147 472
75 Chin right 224 472
76 Line left 113 385
77 Line right 255 385
78 Cheek left 84 334
79 Cheek right 285 337
80 Forehead centre 190 199
81 Forehead left 105 185
82 Forehead right 270 185
Appendix F
Complete list of average faces
Bibliography
[2] Petcu, D., Panica, S., Banciu, D., Negru, V., Eckstein, A. Optical Character Recognition on a Grid Infrastructure. Tables 1, 2, p. 23, 2008.
[5] W. W. Bledsoe. The model method in facial recognition. Panoramic Research Inc.,
Palo Alto, CA. August, 1966.
[6] Rana Islam (National Policing Improvement Agency). Facial Image National
Database (FIND) Police Standard for Still Digital Image Capture and Data
Interchange of Facial/Mugshot and Scar, Mark Tattoo Images. p. 9–28 May,
2007. http://www.npia.police.uk/en/docs/Capture_interchange_standard_
Facial_SMT_images.pdf last visited April. 10, 2009.
[7] Robert O’Harrow. Matching Faces with Mugshots: Software for Police, Others Stir
Privacy Concerns. Washington Post, July 31, 2001.
[8] P. J. Phillips, H. Moon, P. J. Rauss, and S. Rizvi. The FERET Evaluation Method-
ology for Face-Recognition Algorithms. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 22, No. 10. p. 1090–1104, 2000.
[10] R. C. Gonzalez and R. E. Woods. Digital Image Processing, 2nd Edition. Prentice Hall, New Jersey, 2002.
[13] Castleman, Kenneth R. Digital Image Processing 1st ed., Chap.21 p 550–554, 1996.
[14] Tang Yang; Wu Hui-zhong; Xiao Fu; Xiao Liang. Inverse image warping without
searching. Control, Automation, Robotics and Vision Conference, Issue , p. 386–390
Vol. 1, 6-9 Dec. 2004.
[15] George Wolberg. Recent Advances in Image Morphing. Proc. Computer Graphics
Intl.,p. 64–71. Pohang, Korea, June 1996.
[16] Thaddeus Beier, Shawn Neely. Feature-Based Image Metamorphosis. Proc. of the
19th Annual Conference on Computer Graphics and Interactive Techniques, Vol. 26,
p. 35–42, 1992.
[18] Hui Li, Hui Lin, Guang Yang. A New Facial Expression Analysis System Based
on Warp Image Proceedings of the 6th World Congress on Intelligent Control and
Automation (WCICA) Vol.2, p. 10045–10049, June 2006
[19] Ngo Quoc Tao, Nguyen Due Long. A novel approach for image morphing. IEEE
Asia-Pacific Conference on Circuits and Systems p. 97–100, December 2004.
[20] Su Zang, Hanfeng Chen, Pengfei Shi Mosaic and Warping for Forward Moving
Images Proceedings of the Computer Graphics International 2001 p. 363–366, July
2001.
[21] Castleman, Kenneth R. Digital Image Processing 1st ed., Chap.8 p. 127, 1996.
[22] Pratt, William K. PixelSoft Inc. Digital Image Processing.4th ed. 2007 Fundaments
of Image Processing,p. 419–664.
[23] Viola, P., Jones, M. Rapid Object Detection using a Boosted Cascade of Simple
Features Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings
of the 2001 IEEE Computer Society Conference Vol. 1, p. 511–518, 2001.
[24] A. Jacquin and A. Eleftheriadis. Automatic location tracking of faces and facial
features in video signal. Proceedings of International Workshop on Automatic Face
and Gesture Recognition p. 142–147, 1995.
[25] D. Chai and K. Ngan. Locating facial region of head and shoulders color image.
Proceedings of Automatic Face and Gesture Recognition p. 124–129, 1998.
[26] Matthew A. Turk, Alex P. Pentland Face Recognition Using Eigenfaces IEEE Conf.
on Computer Vision and Pattern Recognition, p. 586–591, 1991.
[27] Matthew A. Turk, Alex P. Pentland EigenFaces for recognition Journal of Cognitive
Neuroscience Vol 3, number 1. p. 71–84, 1991.
[28] Thomas Heseltine, Nick Pears, Jim Austin. Evaluation of image pre-processing
techniques for eigenface based face recognition The Proceedings of the Second In-
ternational Conference on Image and Graphics, SPIE Vol. 4875, p. 677–685, 2002.
[29] J. Lu, K.N. Plataniotis, and A.N. Venetsanopoulos. Regularized Discriminant Anal-
ysis For the Small Sample Size Problem in Face Recognition Pattern Recognition
Letters Vol. 24, Issue 16, p. 3079–3087, 2003.
[30] Laurenz Wiskott, Jean-Marc Fellous, Norbert Krüger, Christoph von der Malsburg.
Face Recognition by Elastic Bunch Graph Matching Intelligent Biometric Tech-
niques in Fingerprint and Face Recognition Chapter 11, p. 355–396, 1999.
[31] Brad Calder, Dirk Grunwald, Benjamin Zorn. Quantifying behavioral differences between C and C++ programs. Journal of Programming Languages, Vol. 2, p. 313–351, 1996.
[32] Matthias Kalle Dalheimer. Programming with Qt. O’Reilly UK, 2002.
[35] Gary Bradski, Adrian Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly, 1st ed., 2008, p. 506–515.
[36] Scott Hassan, Steve Cousins, Eric Berger, Keenan Wyrobek Willow Garage.
Robotic Research Institute. http://opencv.willowgarage.com last visited April.
13, 2009
[37] R. Jenkins, A.M. Burton. 100% Accuracy in Automatic Face Recognition. Science
Magazine Vol. 319, p. 435, January 2008
[38] R. Jenkins, A.M. Burton, D. White. Face Recognition from Unconstrained Images:
Progress with Prototypes. Automatic Face and Gesture Recognition (FGR 2006)
7th International p. 25–30, April 2006
[39] A.M. Burton,R. Jenkins, Peter J.B Hancock, D. White. Robust representations for
face recognition: The power of averages. Cognitive Psychology Vol.51 Issue 3, p.
256–284, November 2005
[40] Walter Mora F., Alex Borbón A. Edición de textos científicos en LaTeX. Revista
Digital Matemática Educación e Internet http://www.cisde.itcr.ac.cr
Source Code - StoneFace v.1.0
// Master Thesis - StoneFace v.1.0 beta C by Jorge Garcia Bueno
// Universidad Carlos III de Madrid + University of Glasgow 2009
// All rights reserved by the author
// --------------------------------------------------------------
// automaticwidget.h - Definitions of automaticwidget.cpp
#ifndef AUTOMATICWIDGET_H
#define AUTOMATICWIDGET_H
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <QListView>
#include <QDialog>
#include <QPushButton>
#include <QDesktopWidget>
#include "mainwindow.h"
#include "stonefacewidget.h"
#include "mixeraut.h"
class QAction;
class QMenu;
class QGraphWidget;
class WelcomeWidget;
class MixerAutomatic;
class StoneFaceWidget;
private:
// MainWindow handle
QMainWindow *mainWin;
QProgressBar *pb;
QPixmap *actualPic;
QPixmap *warpedPic;
QLabel *instructionsLbl;
QLabel *instructionsLbl2;
QLabel *numFaces;
QLabel *mixInfo;
QCheckBox *cb;
QPushButton *buttonStartMix;
QPushButton *buttonStartSearch;
QPushButton *buttonLookPath;
QLabel *face1;
QLabel *face2;
StoneFaceWidget *widget;
MixerAutomatic *mixer;
QLineEdit *imgname;
QLineEdit *imgw;
QLineEdit *imgh;
void StartInterface();
void MoveCenterScreen();
QGridLayout *lLayout;
private slots:
void CreateMix();
void SearchFiles();
void LookPath();
public:
AutomaticWidget(QMainWindow *parent);
void ShowStatus();
public slots:
// Status
void ProgressBarSetValue(int value);
void SetStatus(QString,int,int,int,int,int,int,int, QImage, QImage);
void zoneScanned(QString);
};
#endif
#include "automaticwidget.h"
AutomaticWidget::AutomaticWidget(QMainWindow *parent)
{
// Remember the handle of the parent for the returning function
mainWin = parent;
void AutomaticWidget::CreateMix()
{
if (mixer != NULL && widget != NULL)
mixer->Start(imgname->text(),imgw->text().toInt(),imgh->text().toInt(),cb->isChecked() );
delete widget;
delete mixer;
}
void AutomaticWidget::MoveCenterScreen()
{
move((screenWidth-width)/2,(screenHeight - height)/2);
}
void AutomaticWidget::StartInterface()
{
instructionsLbl = new QLabel("Please, in order to create an automatic face, it is neccesary to select
a list of faces from your hard disk or network drive.\n Remember that all the files must be stored in the
same folder location.", this);
instructionsLbl->setWordWrap(true);
instructionsLbl2 = new QLabel("Select the options for the output image and mesh. Default values are 18
0x180 @ 256 colors", this);
instructionsLbl2->setWordWrap(true);
numFaces = new QLabel("Number of images selected: 0");
QString mixInfoL("");
mixInfoL += "Information:\nPath: ...\n";
mixInfoL += "Progress: \n";
mixInfoL += "Depth: \n";
mixInfoL += "Number of Colors: \n";
mixInfoL += "Number of bytes: \n";
mixInfoL += "Width: - Height: ";
mixInfo = new QLabel(mixInfoL);
pb = new QProgressBar(this);
cb = new QCheckBox("Grey Scale",this);
imgname = new QLineEdit(this);
imgw = new QLineEdit(this);
imgh = new QLineEdit(this);
imgname->setText("average.jpg");
imgw->setText("190");
imgh->setText("285");
lLayout = new QGridLayout;
lLayout->addWidget(instructionsLbl,0,0,1,4);
lLayout->addWidget(buttonStartSearch,1,0,1,2);
lLayout->addWidget(numFaces,1,2,1,1);
// ------------------------------------------
lLayout->addWidget(instructionsLbl2,2,0,1,4);
lLayout->addWidget(cb, 3 ,0,1,1);
lLayout->addWidget(buttonStartMix,5,0,1,2);
// ------------------------------------------
lLayout->addWidget(mixInfo, 6 ,0,3,2);
lLayout->addWidget(face1,6,2,3,1);
lLayout->addWidget(face2,6,3,3,1);
lLayout->addWidget(pb, 9, 0,1,4);
setLayout(lLayout);
}
void AutomaticWidget::SearchFiles()
{
int c;
c = mixer->SearchFiles();
numFaces->setText(QString("Number of images selected: ") + QString::number(c));
}
QString info("");
QPixmap qpx;
//repaint();
info += "Information:\nPath: ...";
info += a.right(20) + "\n";
info += "Progress: " + QString::number(order) + "/" + QString::number(total) + "\n";
info += "Depth: " + QString::number(depth) + "\n";
info += "Number of Colors: " + QString::number(numColors) + "\n";
info += "Number of bytes: " + QString::number(numBytes) + "\n";
info += "Width: " + QString::number(width) + " - Height: " + QString::number(height);
mixInfo->setText(info);
face1->setPixmap(qpx.fromImage(original.scaled(100,150)));
face2->setPixmap(qpx.fromImage(morphed.scaled(100,150)));
pb->setValue(100*order/total);
update();
void AutomaticWidget::LookPath()
{
QString fileName = QFileDialog::getSaveFileName(this, tr("Select path to save the average file"),
"./", tr("Images (*.png *.xpm *.jpg *.bmp)"));
if (fileName.length() != 0)
imgname->setText(fileName);
#ifndef CAMCAP_H
#define CAMCAP_H
#include "/usr/include/opencv/cv.h"
#include "/usr/include/opencv/highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>
#include "eigenfaces.h"
#include "mixeraut.h"
class EigenFaces;
class cameraCapture
{
public:
cameraCapture();
int captureWindow();
int detect_and_draw( IplImage* img );
private:
QString pathSnap;
EigenFaces *eigen;
int numFacesUsed;
void lookUpFace();
CvCapture* capture;
};
#endif
#include "cameraCapture.h"
cameraCapture::cameraCapture()
{
eigen = new EigenFaces(NULL);
eigen->SetDB("./me.xml");
numFacesUsed = 0;
}
int cameraCapture::captureWindow()
{
capture = 0;
// Images to capture the frame from video or camera or from file
IplImage *frame, *frame_copy;
capture = cvCaptureFromCAM(-1);
// Create a new named window with title: result
cvNamedWindow( "result", 1 );
//cvNamedWindow( "face", 0 );
// Find if the capture is loaded successfully or not.
// Check the origin of image. If top left, copy the image frame to frame_copy.
//if( frame->origin == IPL_ORIGIN_TL )
//cvCopy( frame, frame_copy, 0 );
// Else flip and copy the image
//else
//cvFlip( frame, frame_copy, 0 );
// Check whether the cascade has loaded successfully. Else report and error and quit
if( !cascade )
{
fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
// There can be more than one face in an image. So create a growable sequence of faces.
// Detect the objects and store them in the sequence
CvSeq* faces = cvHaarDetectObjects( img, cascade, storage,
1.1, 2, CV_HAAR_DO_CANNY_PRUNING,
cvSize(40, 40) );
// Loop the number of faces found.
for( i = 0; i < (faces ? faces->total : 0); i++ )
{
// Create a new rectangle for drawing the face
CvRect* r = (CvRect*)cvGetSeqElem( faces, i );
//cvCopy(img,imageface);
cvResetImageROI(img);
//lookUpFace();
//imageface = cvLoadImage(pathSnap.toLatin1().data());
if ( numFacesUsed == 80)
{
QStringList qsl;
for (int io = 0;io<80;io++)
qsl << QString("./faces/face%1.jpg").arg( io);
}
}
void cameraCapture::lookUpFace()
{
eigen->Recognise(pathSnap.toLatin1().data(), true);
}
#ifndef DETECTOR_H
#define DETECTOR_H
class Detector
{
private:
// Image to be treated
IplImage* image;
void ImagePreProcessForDetection();
public:
// Function to detect the elements that returns the coordinates of two points
void DetectAndDraw(char, int* , int* , int* , int* );
// Constructor of the object
Detector(IplImage*);
};
#endif
#include "detector.h"
Detector::Detector(IplImage* ima)
{
image = ima;
}
void Detector::ImagePreProcessForDetection()
{
cvEqualizeHist( image, image );
}
void Detector::DetectAndDraw(char cascadeType, int* px, int* py, int* imHeight, int* imWidth )
{
switch (cascadeType)
{
case 'F': // Face location
{ //haarcascade_frontalface_alt_tree
cascadeName = (char*)"./cascades/haarcascade_frontalface_alt2.xml";
break;
}
case 'R': // Right eye location
{
cascadeName = (char*)"./cascades/right_eye.xml";
break;
}
case 'L': // Left eye location
{
cascadeName = (char*)"./cascades/left_eye.xml";
break;
}
case 'M': // Mouth location
{
cascadeName = (char*)"./cascades/mouth.xml";
break;
}
case 'N': // Nose location
{
cascadeName = (char*)"./cascades/nose.xml";
break;
}
// Check whether the cascade has loaded successfully. Else report and error and quit
if( !cascade )
{
fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
return;
}
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
// There can be more than one face in an image. So create a growable sequence of faces.
// Detect the objects and store them in the sequence
CvSeq* faces = cvHaarDetectObjects( image, cascade, storage,
1.1, 5, CV_HAAR_DO_CANNY_PRUNING);//,
//cvSize(40, 40) );
// Check if the case if nose selection to update with the most centered
if (cascadeType == 'N')
{
*px = pxSelected; *py = pySelected; *imWidth = wSelected; *imHeight = hSelected;
}
// Release the image
cvReleaseImage(&image);
#ifndef EIGEN_H
#define EIGEN_H
#include <stdio.h>
#include <string.h>
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <QListView>
#include <QDialog>
#include <QPushButton>
#include <QDesktopWidget>
#include "/usr/include/opencv/cv.h"
#include "/usr/include/opencv/cvaux.h"
#include "/usr/include/opencv/highgui.h"
#include "searchwidget.h"
#include "detector.h"
class SearchWidget;
private:
// Parent widget
QWidget *searchwidget;
// Database file path
char *faceDB;
QString *faceDBS;
QString *faceDBinfo;
void GetInfoDB();
void DoPCA();
void StoreTrainingData();
int LoadTrainingData(CvMat ** pTrainPersonNumMat);
int FindNearestNeighbor(float * projectedTestFace, double *distance, int *percentage);
int LoadFaceImgArray(char * filename);
void Learn();
void SaveJustTheFace(QString pathImg,QString pathResized);
void CreateRecogniseList(char* pathOfImage, bool);
QString GetInfo(int);
public:
EigenFaces(QWidget *parent);
void SetDB(char *);
void Recognise(char* pathOfImage, bool);
signals:
#endif
#include "eigenfaces.h"
EigenFaces::EigenFaces(QWidget *parent)
{
searchwidget = parent;
faceImgArr = 0;
void EigenFaces::GetInfoDB()
{
//int nEigen = 0;
int nFaces = 0;
char line[30];
FILE* f=fopen(faceDB,"r");
if (!f) {
fprintf(stderr,"Can't open database to get the info: %s\n", faceDB);
exit(1);
}
/*
<?xml version="1.0"?>
<opencv_storage>
<nEigens>41</nEigens>
<nTrainFaces>42</nTrainFaces>
*/
int x = 0;
x = fscanf(f,"%s\n",line);
x = fscanf(f,"%s\n",line);
x = fscanf(f,"%s\n",line);
x = fscanf(f,"%s\n",line);
fclose(f);
nFaces = ggg->toInt();
//printf("nFaces: %d string %s\n", nFaces, ggg->toLatin1().data());
QFileInfo *fileInfo = new QFileInfo(QString(faceDB));
IplImage *ImageFace = 0;
Detector *dFace;
int px,py,imHeight,imWidth;
ImageFace = cvLoadImage(pathImg.toLatin1().data());
// Creation of the detector and obtaining of the coordinates for the face
dFace = new Detector(ImageFace);
dFace->DetectAndDraw('F',&px,&py,&imHeight,&imWidth);
printf(" px = %d py = %d h = %d w = %d\n", px,py,imHeight,imWidth);
// Now we just want the face scaled with the coordinates changed and work with result
// Not only creates a list with the image for being recognized but also detects the face in big images
// and extract it from the image for being recognized later... :D
void EigenFaces::CreateRecogniseList(char* pathOfImage, bool aloneInTheImage)
{
FILE * f = NULL;
QString pathresized("/tmp/resized.jpg");
//pathresized = QString(pathOfImage) + QString("-resized.jpg") ;
if (aloneInTheImage == false)
{
// Extract the face from the image and resize
SaveJustTheFace(pathOfImage, pathresized);
}
else
{
QImage *resizedImg = new QImage(pathOfImage);
resizedImg = new QImage(resizedImg->scaled(90,90,Qt::IgnoreAspectRatio,Qt::SmoothTransformation));
resizedImg->save(pathresized);
}
// open the input file
if( !(f = fopen((char*)"_recog.tmp", "w")) )
{
fprintf(stderr, "Can\'t write face list file %s\n Space problems? Rights problems?\n", (char*)"_re
cog.tmp");
exit( 0);
}
int ret;
fclose(f);
}
//////////////////////////////////
// recognize()
//
void EigenFaces::Recognise(char* pathOfImage, bool aloneInTheImage)
{
int i, nTestFaces = 0; // the number of test images
CvMat * trainPersonNumMat = 0; // the person numbers during training
float * projectedTestFace = 0;
double distanceNearest;
int per = 0; // Percentage of similarity
FILE* f=fopen(faceDBinfo->toLatin1().data(),"r");
int z = 0, x = 0;
char fileName[128];
fclose (f);
printf("FileName: %s\n",fileName);
emit (NearestImage(fileName));
//emit(InfoNearest(GetInfo(iNearest)));
}
}
int k = 0;
QString line;
QString pathInfoSubjects(*faceDBinfo);
pathInfoSubjects.chop(9);
pathInfoSubjects.append(QString("-extra.txt"));
QFile file(pathInfoSubjects);
if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
return QString("Error loading info!");
QTextStream in(&file);
while (k < position+1)
{
line = in.readLine();
k++;
}
printf("captured: %s\n",line.toLatin1().data());
QStringList list1 = line.split(" ");
QString infoFaceL("");
infoFaceL += "Information of the matched face:\n";
infoFaceL += "Name: " + QString(list1[0].replace(QString("_"),QString(" "))) + " " + QString(list1[1].
replace(QString("_"),QString(" "))) + "\n";
infoFaceL += "Age: " + QString(list1[2]) + "\n";
infoFaceL += "Gender: " + QString(list1[3]) + "\n";
infoFaceL += "Info: " + QString(list1[4].replace(QString("_"),QString(" "))) + "\n";
infoFaceL += "Web: " + QString(list1[5]);
return infoFaceL;
//////////////////////////////////
// loadTrainingData()
//
int EigenFaces::LoadTrainingData(CvMat ** pTrainPersonNumMat)
{
CvFileStorage * fileStorage;
int i;
return 1;
}
//////////////////////////////////
// storeTrainingData()
//
void EigenFaces::StoreTrainingData()
{
CvFileStorage * fileStorage;
int i;
//////////////////////////////////
// findNearestNeighbor()
//
int EigenFaces::FindNearestNeighbor(float * projectedTestFace, double *distance, int *percentage)
{
//double leastDistSq = 1e12;
double leastDistSq = DBL_MAX,maxDist=0;
int i, iTrain, iNearest = 0;
//////////////////////////////////
// doPCA()
//
void EigenFaces::DoPCA()
{
int i;
CvTermCriteria calcLimit;
CvSize faceImgSize;
//////////////////////////////////
// loadFaceImgArray()
//
int EigenFaces::LoadFaceImgArray(char * filename)
{
FILE * imgListFile = 0;
char imgFilename[512];
int iFace, nFaces=0;
int x = 0;
if( !faceImgArr[iFace] )
{
fprintf(stderr, "Can\'t load image from %s\n", imgFilename);
return 0;
}
}
fclose(imgListFile);
return nFaces;
}
void EigenFaces::Learn()
{
int i, offset;
DoPCA();
printf("PCA done!\n");
// project the training images onto the PCA subspace
projectedTrainFaceMat = cvCreateMat( nTrainFaces, nEigens, CV_32FC1 );
offset = projectedTrainFaceMat->step / sizeof(float);
for(i=0; i<nTrainFaces; i++)
{
//int offset = i * nEigens;
cvEigenDecomposite(
faceImgArr[i],
nEigens,
eigenVectArr,
0, 0,
pAvgTrainImg,
//projectedTrainFaceMat->data.fl + i*nEigens);
projectedTrainFaceMat->data.fl + i*offset);
}
#ifndef GRAPHWIDGET_H
#define GRAPHWIDGET_H
#include <QtGui/QGraphicsView>
#include "node.h"
#include "detector.h"
class Node;
public:
GraphWidget();
void itemMoved();
QImage currentImage();
void nodeMoved(int nodeNumber, qreal newX,qreal newY);
void savePicture(const QString &fileName);
int loadPicture(const QString &fileName, char* origMesh, char* destMesh,bool autodetection);
int updatePicture(bool showInWindow);
void downloadAddMesh(QList<qreal>*);
void printEdges();
void removeEdges();
void nodeClicked(int nodeNumber);
void refreshPoints();
int detectElements(bool stretch);
void helpImageVisible(bool);
protected:
void keyPressEvent(QKeyEvent *event);
void timerEvent(QTimerEvent *event);
void wheelEvent(QWheelEvent *event);
void drawBackground(QPainter *painter, QImage &face);
double A[3][3];
double in[3][3], alpha, beta, gamma;
double Apx,Apy,Bpx,Bpy,Cpx,Cpy,Xp,Yp;
double detInv;//n is the determinant of A
private:
QPixmap *fa1,*fa2,*faRef;
long lineLength(int A1, int B1, int A2, int B2);
double areaOfTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy);
int timerId;
int lastClicked;
bool showEdges;
bool insideTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy, int Px, int Py);
QGraphicsPixmapItem *face1,*faceReference;
QImage *imf1,*imf2;
QGraphicsScene *superScene;
Node** hnode;
QGraphicsLineItem** qline;
QString path;
void showWarpedWindow(QImage* &pic);
int centerGroupX;
int centerGroupY;
bool showlines;
public slots:
void turnGroupLeft();
void turnGroupRight();
void increaseGroupDistance();
void decreaseGroupDistance();
};
#endif
#include "graphwidget.h"
#include <QDebug>
#include <QGraphicsScene>
#include <QWheelEvent>
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <math.h>
#include <QPushButton>
#include <QSplashScreen>
GraphWidget::GraphWidget()
: timerId(0)
{
centerGroupX = 0;
centerGroupY = 0;
showlines = false;
}
void GraphWidget::itemMoved()
{
if (!timerId)
timerId = startTimer(1000 / 25); // 25 updates per second
}
aNodes[nodeNumber][0] = (int)newX;
aNodes[nodeNumber][1] = (int)newY;
switch (event->key())
{
// Key Down -> move the last clicked node 2 pixels down
case Qt::Key_Down:
{
displaceNodeDown();
break;
}
// Key Left -> move the last clicked node 2 pixels left
case Qt::Key_Left:
{
displaceNodeLeft();
break;
}
// Key Right -> move the last clicked node 2 pixels right
case Qt::Key_Right:
{
displaceNodeRight();
break;
}
// Key A -> move the "color group" of last clicked node 2 pixels left
case Qt::Key_A:
{
displaceGroupLeft();
break;
}
// Key D -> move the "color group" of last clicked node 2 pixels right
case Qt::Key_D:
{
displaceGroupRight();
break;
}
// Key S -> move the "color group" of last clicked node 2 pixels down
case Qt::Key_S:
{
displaceGroupDown();
break;
}
// Key W -> move the "color group" of last clicked node 2 pixels up
case Qt::Key_W:
{
displaceGroupUp();
break;
}
case Qt::Key_P:
case Qt::Key_M:
foreach (Node *node, nodes)
fprintf(stderr,"%d %d %c\n",(int)node->posX(),(int)node->posY(),node->getColor());
break;
default:
QGraphicsView::keyPressEvent(event);
}
void GraphWidget::removeEdges()
{
showlines = false;
}
}
QImage GraphWidget::currentImage()
{
return *imf2;
}
pic->save("temp-warp.jpg");
IplImage* tempx = cvLoadImage("temp-warp.jpg", -1);
cvNamedWindow("Temporal warped image", 1);
cvShowImage( "Temporal warped image", tempx);
path = fileName;
fa1 = new QPixmap(fileName);
imf1 = new QImage(fa1->toImage());
*fa1 = fa1->scaled(380,570,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
*imf1 = imf1->scaled(380,570,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
*fa2 = fa2->scaled(380,570,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
*imf2 = imf2->scaled(380,570,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
superScene->addItem(face1);
superScene->addItem(faceReference);
face1->setPos(0,0);
faceReference->setPos(380,0);
// Load the original mesh and check if there is any input error
FILE* f=fopen(origMesh,"r");
FILE* g=fopen("posHelpNodes.dat","r");
if (!f)
{
if (!g)
{
int x, y;
char color;
// Fill the array with the values of each line until the end
for(int i=0; i<nNodes; i++)
{
// Read the final mesh and check if there is any input error
FILE* f3=fopen(destMesh,"r");
if (!f3)
{
exit(-1);
}
int nNodes2;
char c;
// Fill the array with the values of each line until the end
for (int k=0;k<nNodes2;k++)
z = fscanf(f3,"%d %d %c\n",&aNodes2[k][0],&aNodes2[k][1], &c);
// Read the points of the triangles and check if there is any input error
FILE* f4=fopen("triangles.dat","r");
if (!f4)
{
int yu=0;
// Fill the array with the values of each line until the end
for (int k=0;k<nTri;k++)
{
z = fscanf(f4,"%d %d %d\n",&aTri[k][0],&aTri[k][1],&aTri[k][2]);
qline[yu] = new QGraphicsLineItem(aNodes[aTri[k][0]][0],aNodes[aTri[k][0]][1],aNodes[aTri[k][1]][0],aNodes[aTri[k][1]][1]);
superScene->addItem(qline[yu]); yu++;
qline[yu] = new QGraphicsLineItem(aNodes[aTri[k][1]][0],aNodes[aTri[k][1]][1],aNodes[aTri[k][2]][0],aNodes[aTri[k][2]][1]);
superScene->addItem(qline[yu]); yu++;
qline[yu] = new QGraphicsLineItem(aNodes[aTri[k][0]][0],aNodes[aTri[k][0]][1],aNodes[aTri[k][2]][0],aNodes[aTri[k][2]][1]);
superScene->addItem(qline[yu]); yu++;
refreshPoints();
void GraphWidget::refreshPoints()
{
}
void GraphWidget::savePicture(const QString &fileName)
{
QString pathImageWarped;
if (fileName.length() == 0)
{
pathImageWarped.insert(0,path.mid(0,path.length()-4));
pathImageWarped = pathImageWarped.insert(path.length()-4,"-warped.jpg");
}
else
pathImageWarped.insert(0,fileName);
//fprintf(stderr,"%s",pathImageWarped.toLatin1().data());
imf2->save(pathImageWarped);
FILE* f=fopen(pathMesh.toLatin1().data(),"w");
if (!f)
{
// Ignore the help nodes of the right picture (only for help purposes)
if (node->helpNode == false) nodes << node;
fprintf(f,"86\n");
fclose(f);
}
// the length of this list must be the same as 2 * the number of nodes in the mesh
if (list->size() == 68)
{
list->replace(i,list->value(i) + node->posX());
list->replace(i+1,list->value(i+1) + node->posY());
i = i + 2;
}
}
}
scale(scaleFactor, scaleFactor);
}
void GraphWidget::increaseGroupDistance()
{
int ii=0;char colorGroup='.';
int minpX = 1000, minpY = 1000;
}
void GraphWidget::turnGroupRight()
{
if (node->getColor() == colorGroup)
{
// turn the group 15 degrees clockwise
}
}
void GraphWidget::turnGroupLeft()
{
int hyp = 0 ;
float theta = 0.0;
foreach (Node *node, nodes)
if (node->getColor() == colorGroup)
{
// turn the group 15 degrees counter-clockwise
}
}
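The bodies of turnGroupLeft() and turnGroupRight() are shortened above. The sketch below illustrates one way the 15-degree group rotation could be written; rotateGroupSketch is a hypothetical helper, and the node list, group colour and group centre are assumed to be gathered as in the rest of the listing.
// Illustrative sketch (not in the original listing): rotate every node of a colour group
// by 'degrees' around the group centre (cx, cy).
static void rotateGroupSketch(QList<Node *> &nodes, char colorGroup,
                              double cx, double cy, double degrees)
{
    const double theta = degrees * M_PI / 180.0; // e.g. -15 for turnGroupLeft, +15 for turnGroupRight
    foreach (Node *node, nodes)
        if (node->getColor() == colorGroup)
        {
            double dx = node->posX() - cx;
            double dy = node->posY() - cy;
            // standard 2D rotation about the group centre
            node->setPos(cx + dx * cos(theta) - dy * sin(theta),
                         cy + dx * sin(theta) + dy * cos(theta));
        }
}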
void GraphWidget::decreaseGroupDistance()
{
void GraphWidget::displaceNodeDown()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
void GraphWidget::displaceNodeUp()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
void GraphWidget::displaceNodeLeft()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
foreach (Node *node, nodes)
if (ii++ == lastClicked) node->setPos(node->posX() - 2, node->posY());
}
void GraphWidget::displaceNodeRight()
{
// Creation of a list with the nodes of the Scene to interact with
void GraphWidget::displaceGroupLeft()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
char colorGroup='.';
void GraphWidget::displaceGroupRight()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
char colorGroup='.';
void GraphWidget::displaceGroupDown()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
char colorGroup='.';
void GraphWidget::displaceGroupUp()
{
// Creation of a list with the nodes of the Scene to interact with
int ii=0;
char colorGroup='.';
void GraphWidget::displaceAllLeft()
{
// Creation of a list with the nodes of the Scene to interact with
void GraphWidget::displaceAllRight()
{
// Creation of a list with the nodes of the Scene to interact with
void GraphWidget::displaceAllUp()
{
void GraphWidget::displaceAllDown()
{
// Creation of a list with the nodes of the Scene to interact with
}
int GraphWidget::updatePicture(bool showInWindow)
{
}
flag = false;
}
}
//Inverse....
//Now
//Bicubic interpolation:
dxLeft = Xp - int(Xp);
dxRight = 1 -dxLeft;
dyUp = Yp - int(Yp);
dyDown = 1 - dyUp;
// Create a color with the values obtained and fill the pixel
QColor color(RedInterpolated, GreenInterpolated, BlueInterpolated);
imf2->setPixel(i,j,color.rgb());
refreshPoints();
if (pbd->wasCanceled() == true) return 1;
pbd->setValue((int)(100*j/imf1->height() +1));
}
}
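The weighting of the four neighbouring pixels is shortened in the warping loop above. The sketch below shows how the fractional offsets dxLeft, dxRight, dyUp and dyDown can be combined into the interpolated colour channels written into imf2; sampleInterpolated is a hypothetical helper and the bilinear-style formula is an assumption, not necessarily the exact interpolation of the original program.
// Illustrative sketch (not in the original listing): combine the four pixels that surround
// the back-mapped point (Xp, Yp) using the fractional weights computed above.
static QRgb sampleInterpolated(const QImage &src, double Xp, double Yp)
{
    int x0 = (int)Xp, y0 = (int)Yp;
    int x1 = qMin(x0 + 1, src.width()  - 1);
    int y1 = qMin(y0 + 1, src.height() - 1);
    double dxLeft = Xp - x0, dxRight = 1.0 - dxLeft;
    double dyUp   = Yp - y0, dyDown  = 1.0 - dyUp;
    QColor c00(src.pixel(x0, y0)), c10(src.pixel(x1, y0));
    QColor c01(src.pixel(x0, y1)), c11(src.pixel(x1, y1));
    // each neighbour contributes in proportion to how close (Xp, Yp) lies to it
    double RedInterpolated   = dxRight*dyDown*c00.red()   + dxLeft*dyDown*c10.red()
                             + dxRight*dyUp  *c01.red()   + dxLeft*dyUp  *c11.red();
    double GreenInterpolated = dxRight*dyDown*c00.green() + dxLeft*dyDown*c10.green()
                             + dxRight*dyUp  *c01.green() + dxLeft*dyUp  *c11.green();
    double BlueInterpolated  = dxRight*dyDown*c00.blue()  + dxLeft*dyDown*c10.blue()
                             + dxRight*dyUp  *c01.blue()  + dxLeft*dyUp  *c11.blue();
    return QColor((int)RedInterpolated, (int)GreenInterpolated, (int)BlueInterpolated).rgb();
}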
lastClicked = nodeNumber;
// Face Coordinates
int faceX, faceY, faceW, faceH;
// Mouth Coordinates
int mouthX, mouthY, mouthW, mouthH;
// Nose Coordinates
int noseX, noseY, noseW, noseH;
// ####################################################################################
// -------------------- First: Face recognition --------------------------------------
// ####################################################################################
((char*)subImageFace->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageFace->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageFace->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=0;
}
//cvSaveImage("face.jpg",subImageFace);
// Creation of the detector object and retrieval of the coordinates
dFace = new Detector(subImageFace);
dFace->DetectAndDraw('F',&px,&py,&imHeight,&imWidth);
// Fill the face boundary variables. Through the face boundaries the rest of the elements will be provided
faceX = px; faceY = py; faceW = imWidth; faceH = imHeight;
// ####################################################################################
// -------- Second: Both eyes recognition upper half face -------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
for (int i = faceY ; i < faceY + faceH*3/4;i++)
{
for (int j = 3* faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageEyes->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageEyes->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageEyes->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=faceX;
}
// Fill the eyes boundary variables. Through the face boundaries the eyes will be provided
eyesX = px; eyesY = py; eyesW = imWidth; eyesH = imHeight;
// ####################################################################################
// -------- Third: Left eye recognition inside left half eyes region ------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=eyesX;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3* eyesX; j < (eyesX + eyesW/2)*3; j+=3)
{
((char*)subImageEyeL->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageEyeL->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageEyeL->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=eyesX;
}
nodei = 0;
// ####################################################################################
// -------- Fourth: Right eye recognition inside left upper quarter -------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h = eyesX + eyesW/2;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3*(eyesX + eyesW/2); j < 3*(eyesX + eyesW); j+=3)
{
((char*)subImageEyeR->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageEyeR->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageEyeR->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=eyesX + eyesW/2;
}
// cvSaveImage("right.jpg",subImageEyeR);
dEyeR = new Detector(subImageEyeR);
dEyeR->DetectAndDraw('R',&px,&py,&imHeight,&imWidth);
superScene->addRect((qreal)px,(qreal)py,(qreal)imWidth,(qreal)imHeight,*(new QPen(*(new QColor(0,0,255)))));
fprintf(stderr,"Coordn eye Right %d %d wid %d hei %d\n",px,py,imWidth,imHeight);
// Move the nodes where we think they should be... important, an offset of 0.05*size is added
nodei = 0;
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
for (int i = faceY + faceH*1/2 ; i < faceY + faceH;i++)
{
for (int j = 3*faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageMouth->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageMouth->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageMouth->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=faceX;
}
cvSaveImage("mouth.jpg",subImageMouth);
dMouth = new Detector(subImageMouth);
dMouth->DetectAndDraw('M',&px,&py,&imHeight,&imWidth);
superScene->addRect((qreal)px,(qreal)py,(qreal)imWidth,(qreal)imHeight, *(new QPen( * (new QColor(0,255,255)))));
fprintf(stderr,"Coordn mouth %d %d wid %d hei %d\n",px,py,imWidth,imHeight);
// Move the nodes where we think they should be... important, an offset is added
nodei = 0;
if (stretch == false)
foreach (Node *node, nodes)
{
if (nodei == 26) node->setPos(px + 0.05 * imWidth, py + 0.50 * imHeight);
if (nodei == 27) node->setPos(px + 1.00 * imWidth, py + 0.50 * imHeight);
if (nodei == 23) node->setPos(px + 0.50 * imWidth, py + 0.10 * imHeight);
if (nodei == 14) node->setPos(px + 0.50 * imWidth, py + 1.00 * imHeight);
if (nodei == 24) node->setPos(px + 0.35 * imWidth, py + 0.05 * imHeight);
if (nodei == 25) node->setPos(px + 0.65 * imWidth, py + 0.05 * imHeight);
if (nodei == 18) node->setPos(px + 0.20 * imWidth, py + 0.50 * imHeight);
if (nodei == 19) node->setPos(px + 0.80 * imWidth, py + 0.50 * imHeight);
if (nodei == 21) node->setPos(px + 0.35 * imWidth, py + 0.20 * imHeight);
if (nodei == 20) node->setPos(px + 0.50 * imWidth, py + 0.20 * imHeight);
if (nodei == 22) node->setPos(px + 0.65 * imWidth, py + 0.20 * imHeight);
if (nodei == 17) node->setPos(px + 0.50 * imWidth, py + 0.50 * imHeight);
if (nodei == 20) node->setPos(px + 0.50 * imWidth, py + 0.25 * imHeight);
if (nodei == 15) node->setPos(px + 0.25 * imWidth, py + 1.00 * imHeight);
if (nodei == 16) node->setPos(px + 0.75 * imWidth, py + 1.00 * imHeight);
nodei++;
}
else
{
centerX = px + imWidth / 2;centerY = py + imHeight/2;
foreach (Node *node, nodes)
{
if (nodei == 26) node->setPos(centerX - 52, centerY - 4);
if (nodei == 27) node->setPos(centerX + 52, centerY - 3);
if (nodei == 23) node->setPos(centerX + 0, centerY - 16);
if (nodei == 14) node->setPos(centerX + 0, centerY + 16);
if (nodei == 24) node->setPos(centerX - 17, centerY - 18);
if (nodei == 25) node->setPos(centerX + 17, centerY - 18);
if (nodei == 18) node->setPos(centerX - 32, centerY - 3);
if (nodei == 19) node->setPos(centerX + 32, centerY - 3);
if (nodei == 21) node->setPos(centerX - 17, centerY - 9);
if (nodei == 20) node->setPos(centerX + 0, centerY - 7);
if (nodei == 22) node->setPos(centerX + 17, centerY - 9);
if (nodei == 17) node->setPos(centerX + 0, centerY + 0);
if (nodei == 20) node->setPos(centerX + 0, centerY - 7);
if (nodei == 15) node->setPos(centerX - 29, centerY + 16);
if (nodei == 16) node->setPos(centerX + 29, centerY + 16);
nodei++;
}
}
// ####################################################################################
// -------- Sixth: Nose recognition inside the center side -------------------
// ####################################################################################
return 0;
subImageNose = cvCreateImage(cvSize(imf1->width(), imf1->height()), IPL_DEPTH_8U, 3);
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
if (mouthY == 0 ) mouthY = faceY + faceH;
for (int i = eyesY + eyesH ; i < mouthY;i++)
{
for (int j = 3*faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageNose->imageData)[i*imf1->width()*3 + j + 0] = qBlue(imf1->pixel(h,i)); // blue channel
((char*)subImageNose->imageData)[i*imf1->width()*3 + j + 1] = qGreen(imf1->pixel(h,i)); // green channel
((char*)subImageNose->imageData)[i*imf1->width()*3 + j + 2] = qRed(imf1->pixel(h,i)); // red channel
h++;
}h=faceX;
}
//cvSaveImage("nose.jpg",subImageNose);
dNose = new Detector(subImageNose);
dNose->DetectAndDraw('N',&px,&py,&imHeight,&imWidth);
superScene->addRect((qreal)px,(qreal)py,(qreal)imWidth,(qreal)imHeight, *(new QPen( * (new QColor(127,0,127)))));
fprintf(stderr,"Coordn nose %d %d wid %d hei %d\n",px,py,imWidth,imHeight);
// Move the nodes where we think they should be... important, an offset of 0.10*size is added
nodei = 0;
if ((px != 0) and (py != 0))
{
if (stretch == false)
{
foreach (Node *node, nodes)
{
node->calculateForces();
//if (nodei == 6) node->setPos((faceX+faceW),midYeyeR );
nodei++;
}
}
else
{
centerX = px + imWidth / 2;centerY = py + imHeight/2;
foreach (Node *node, nodes)
{
if (nodei == 31) node->setPos(centerX - 42, centerY + 2);
if (nodei == 32) node->setPos(centerX + 42, centerY + 2);
if (nodei == 34) node->setPos(centerX - 27, centerY - 18);
if (nodei == 35) node->setPos(centerX + 27, centerY - 18);
if (nodei == 33) node->setPos(centerX + 0, centerY - 6);
if (nodei == 29) node->setPos(centerX - 23, centerY + 11);
if (nodei == 30) node->setPos(centerX + 23, centerY + 11);
if (nodei == 28) node->setPos(centerX + 0, centerY + 18);
if (nodei == 28) node->setPos(centerX + 0, centerY + 18);
if (nodei == 36) node->setPos((rEyeMediumX + lEyeMediumX)/2, (rEyeMediumY + lEyeMediumY)/2);
nodei++;
}
}
}
return 0;
void GraphWidget::printEdges()
{
showlines = true;
refreshPoints();
removeEdges();
for (int i = 0 ; i < 3; i++)
for (int k = 0; k < nTri; k ++)
{
superScene->addLine(aNodes[ aTri[k][0]][0], aNodes[ aTri[k][0]][1],
aNodes[ aTri[k][1]][0], aNodes[ aTri[k][1]][1],QPen( QColor(0,255,0) ) );
superScene->addLine(aNodes[ aTri[k][0]][0], aNodes[ aTri[k][0]][1],
aNodes[ aTri[k][2]][0], aNodes[ aTri[k][2]][1],QPen( QColor(0,255,0) ) );
superScene->addLine(aNodes[ aTri[k][1]][0], aNodes[ aTri[k][1]][1],
aNodes[ aTri[k][2]][0], aNodes[ aTri[k][2]][1],QPen( QColor(0,255,0) ) );
}
}
double GraphWidget::areaOfTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy)
{
bool GraphWidget::insideTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy, int Px, int Py)
{
}
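The bodies of areaOfTriangle() and insideTriangle() are shortened above. A common implementation that matches their signatures is the signed-area test sketched below; it is illustrative and may differ from the original formulas.
// Illustrative sketch (not in the original listing): area-based point-in-triangle test.
double GraphWidget::areaOfTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy)
{
    // the absolute value of the cross product gives twice the triangle area
    return fabs((Bx - Ax) * (double)(Cy - Ay) - (Cx - Ax) * (double)(By - Ay)) / 2.0;
}
bool GraphWidget::insideTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy, int Px, int Py)
{
    // P lies inside ABC when the three sub-triangles PAB, PBC and PCA together
    // cover (almost exactly) the same area as ABC itself
    double full = areaOfTriangle(Ax, Ay, Bx, By, Cx, Cy);
    double sum  = areaOfTriangle(Px, Py, Ax, Ay, Bx, By)
                + areaOfTriangle(Px, Py, Bx, By, Cx, Cy)
                + areaOfTriangle(Px, Py, Cx, Cy, Ax, Ay);
    return fabs(full - sum) < 0.5; // small tolerance for integer coordinates
}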
void GraphWidget::helpImageVisible(bool showit)
{
faceReference->setVisible(showit);
if (showit == true)
scene()->setSceneRect(0,0, 760, 570);
else
scene()->setSceneRect(0,0, 380, 570);
}
#include <QApplication>
#include "mainwindow.h"
int main(int argc, char *argv[])
{
// Creation of an application
//Q_INIT_RESOURCE(application);
QApplication app(argc, argv);
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QtGui>
#include "graphwidget.h"
#include "mixer.h"
#include "cameraCapture.h"
#include "welcomewidget.h"
#include "automaticwidget.h"
#include "searchwidget.h"
class QAction;
class QMenu;
class QGraphWidget;
class WelcomeWidget;
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
MainWindow();
public slots:
void welcomeButton();
protected:
void closeEvent(QCloseEvent *event);
private slots:
void open();
bool saveAs();
void about();
void average();
void warpImage();
void showLines(bool);
void nextImage();
void closePicture();
void previousImage();
void cameraCap();
void showHelpFace(bool);
private:
void createGeneralActions();
void createGraphActions();
void createMenus();
void createToolBars();
void createStatusBar();
void readSettings();
void writeSettings();
bool maybeSave();
void createWelcomeImage();
void MoveCenterScreen();
void createBackground();
QPicture *background;
QPainter *painterBack;
QString curFile;
QStringList files;
QStringList::Iterator it;
QMenu *fileMenu;
QMenu *viewMenu;
QMenu *opMenu;
QMenu *actionsMenu;
QMenu *imgsMenu;
QMenu *helpMenu;
// Submenus
QMenu *oneNode;
QMenu *groupNodes;
QMenu *allNodes;
QToolBar *fileToolBar;
QToolBar *editToolBar;
QToolBar *nodeToolBar;
QToolBar *groupToolBar;
QToolBar *allToolBar;
QToolBar *progressToolBar;
QToolBar *viewToolBar;
QWidget *backImage;
QDockWidget *dock;
QAction *openAct;
QAction *saveAsAct;
QAction *closeAct;
QAction *avgAct;
QAction *cameraAct;
QAction *showHelpFaceAct;
QAction *displaceNodeLeftAct;
QAction *displaceNodeRightAct;
QAction *displaceNodeUpAct;
QAction *displaceNodeDownAct;
QAction *displaceGroupLeftAct;
QAction *displaceGroupRightAct;
QAction *displaceGroupUpAct;
QAction *displaceGroupDownAct;
QAction *turnGroupLeftAct;
QAction *turnGroupRightAct;
QAction *increaseGroupDistanceAct;
QAction *decreaseGroupDistanceAct;
QAction *displaceAllLeftAct;
QAction *displaceAllRightAct;
QAction *displaceAllUpAct;
QAction *displaceAllDownAct;
QAction *showLinesAct;
QAction *warpAct;
QAction *nextAct;
QAction *previousAct;
QAction *exitAct;
QAction *aboutAct;
QAction *aboutQtAct;
WelcomeWidget* ww;
QProgressBar *pb;
int numberOfImages;
int contImages;
signals:
void turnGroupLeft();
void turnGroupRight();
void increaseGroupDistance();
void decreaseGroupDistance();
};
#endif
#include "mainwindow.h"
MainWindow::MainWindow()
{
// No file opened
setCurrentFile("");
createWelcomeImage();
contImages = 0;
widget = NULL;
void MainWindow::createGraphActions()
{
}
void MainWindow::createGeneralActions()
{
nextAct = new QAction(QIcon("./images/next.png"), tr("&Continue with the next image of the list"), this);
nextAct->setStatusTip(tr("Change to the next image"));
connect(nextAct, SIGNAL(triggered()), this, SLOT(nextImage()));
void MainWindow::createMenus()
{
fileMenu = menuBar()->addMenu(tr("&File"));
fileMenu->addAction(openAct);
fileMenu->addAction(saveAsAct);
fileMenu->addSeparator();
fileMenu->addAction(closeAct);
fileMenu->addSeparator();
fileMenu->addAction(exitAct);
viewMenu = menuBar()->addMenu(tr("&View"));
viewMenu->addAction(showLinesAct);
viewMenu->addAction(showHelpFaceAct);
opMenu = menuBar()->addMenu(tr("&Mesh..."));
actionsMenu = menuBar()->addMenu("&Operations");
actionsMenu->addAction(avgAct);
actionsMenu->addAction(warpAct);
actionsMenu->addSeparator();
actionsMenu->addAction(cameraAct);
imgsMenu = menuBar()->addMenu(tr("&Images..."));
imgsMenu->addAction(previousAct);
imgsMenu->addAction(nextAct);
helpMenu = menuBar()->addMenu(tr("&Help"));
helpMenu->addAction(aboutAct);
helpMenu->addAction(aboutQtAct);
}
void MainWindow::createToolBars()
{
fileToolBar = addToolBar(tr("File"));
fileToolBar->setIconSize(QSize(40,40));
fileToolBar->addAction(openAct);
fileToolBar->addAction(saveAsAct);
editToolBar = addToolBar(tr("Operations"));
editToolBar->setIconSize(QSize(40,40));
editToolBar->addAction(warpAct);
editToolBar->addAction(avgAct);
editToolBar->addAction(cameraAct);
editToolBar->addSeparator();
editToolBar->addAction(previousAct);
editToolBar->addAction(nextAct);
viewToolBar = addToolBar(tr("Views"));
viewToolBar->setIconSize(QSize(40,40));
viewToolBar->addAction(showLinesAct);
viewToolBar->addAction(showHelpFaceAct);
pb = new QProgressBar(this);
progressToolBar = addToolBar(tr("Progress"));
progressToolBar->addWidget(pb);
nodeToolBar->addAction(displaceNodeLeftAct);
nodeToolBar->addAction(displaceNodeRightAct);
nodeToolBar->addAction(displaceNodeUpAct);
nodeToolBar->addAction(displaceNodeDownAct);
groupToolBar->addAction(displaceGroupLeftAct);
groupToolBar->addAction(displaceGroupRightAct);
groupToolBar->addAction(displaceGroupUpAct);
groupToolBar->addAction(displaceGroupDownAct);
groupToolBar->addAction(turnGroupLeftAct);
groupToolBar->addAction(turnGroupRightAct);
groupToolBar->addAction(increaseGroupDistanceAct);
groupToolBar->addAction(decreaseGroupDistanceAct);
allToolBar->addAction(displaceAllLeftAct);
allToolBar->addAction(displaceAllRightAct);
allToolBar->addAction(displaceAllUpAct);
allToolBar->addAction(displaceAllDownAct);
addToolBar(Qt::LeftToolBarArea,nodeToolBar);
addToolBar(Qt::LeftToolBarArea,groupToolBar);
addToolBar(Qt::LeftToolBarArea,allToolBar);
}
void MainWindow::createStatusBar()
{
statusBar()->showMessage(tr("Ready to warp! :)"));
}
void MainWindow::createWelcomeImage()
{
setWindowTitle(tr("StoneFace v.1 - Select Option"));
resize(QSize(3*128+80, 180));
ww = new WelcomeWidget(this);
setCentralWidget(ww);
void MainWindow::readSettings()
{
QSettings settings("Warping", "Settings");
QPoint pos = settings.value("pos", QPoint(200, 200)).toPoint();
QSize size = settings.value("size", QSize(1000, 1000)).toSize();
resize(size);
move(pos);
}
void MainWindow::writeSettings()
{
QSettings settings("Warping", "Settings");
settings.setValue("pos", pos());
settings.setValue("size", size());
}
void MainWindow::createBackground()
{
QPalette p(palette());
p.setBrush(QPalette::Background, bg);
setPalette(p);
setCentralWidget(backImage);
//repaint();
repaint();
/*
QStringList list = QFileDialog::getOpenFileNames(
0,
"Select one or more images to warp!",
"./",
"Images (*.png *.xpm *.jpg *.bmp *.gif)");
numberOfImages = list.size();
it = list.begin();
it++;
}
*/
}
void MainWindow::welcomeButton()
{
ww->setVisible(false);
switch (ww->itemSelected())
{
case 0:
{
// Design the interface of the main window
createGeneralActions();
createMenus();
createToolBars();
createStatusBar();
// Paint a background image
createBackground();
// Remember last settings (width, height, top, left)
readSettings();
MoveCenterScreen();
setWindowTitle(tr("StoneFace v.1 - Manual Face Creator"));
break;
}
case 1:
{
// Manage the widget position and properties of the window
resize(QSize(3*180, 3*180));
MoveCenterScreen();
createStatusBar();
setWindowTitle(tr("StoneFace v.1 - Automatic Face Creator"));
AutomaticWidget *auw = new AutomaticWidget(this);
setCentralWidget(auw);
break;
}
case 2:
{
setWindowTitle(tr("StoneFace v.1 - Face Search Engine"));
SearchWidget *sw = new SearchWidget(this);
MoveCenterScreen();
setCentralWidget(sw);
MoveCenterScreen();
break;
}
}
}
if (ret == QMessageBox::Yes)
{
writeSettings();
event->accept();
}
else
{
event->ignore();
}
}
void MainWindow::open()
{
pb->setMaximum(numberOfImages);
pb->setMinimum(0);
if (it != files.end())
loadFile(*it);
// Now we let the user play with the image and we wait until 'next' is pressed
//}
*/
}
bool MainWindow::saveAs()
{
if (widget != NULL)
{
QString fileName = QFileDialog::getSaveFileName(this);
if (fileName.isEmpty())
return false;
return saveFile(fileName);
}
return false;
}
void MainWindow::closePicture()
{
if (widget != NULL)
{
QMessageBox msgBox(this);
msgBox.setText("Are you sure you want to remove actual work without saving?");
msgBox.setInformativeText("");
msgBox.setStandardButtons(QMessageBox::Yes | QMessageBox::No);
msgBox.setIconPixmap(QPixmap("./images/Exclamation.png"));
int ret = msgBox.exec();
if (ret == QMessageBox::Yes)
{
delete widget;
contImages = 0;
createBackground();
}
}
}
void MainWindow::about()
{
QMessageBox::about(this, tr("About Application"),
tr("The <b>StoneFace v.1</b> This application was developed to generate averaging faces "
"in order to be used to boost the actual techniques of face recognition.\n"
"<b>University of Glasgow</b> - 2009\n<b>Jorge Garcia Bueno</b>"));
}
if (showOrNot == true)
{
widget->printEdges();
}
else
{
widget->removeEdges();
void MainWindow::previousImage()
{
if (widget != NULL)
{
if (it != files.begin())
{
it--;
contImages--;
QApplication::setOverrideCursor(Qt::WaitCursor);
widget->updatePicture(false);
widget->savePicture(QString (""));
QApplication::restoreOverrideCursor();
QApplication::setOverrideCursor(Qt::WaitCursor);
loadFile(*it);
QApplication::restoreOverrideCursor();
}
}
}
void MainWindow::nextImage()
{
if (widget != NULL)
{
// Advance one position in the list
it++;
contImages++;
loadFile(*it);
QApplication::restoreOverrideCursor();
}
else
{
QMessageBox msgBox(this);
QAbstractButton *openMoreButton = msgBox.addButton(tr("Create average..."), QMessageBox::ActionRole);
QAbstractButton *CancelButton = msgBox.addButton(tr("No, thanks"), QMessageBox::ActionRole);
msgBox.setText("Congratulations! You have reached the end of the list of pictures!");
msgBox.setIcon(QMessageBox::Information);
msgBox.exec();
if (msgBox.clickedButton() == openMoreButton)
{
average();
}
if (msgBox.clickedButton() == CancelButton)
{
delete widget;
// exit?¿
}
}
}
}
bool MainWindow::maybeSave()
{
/*if (textEdit->document()->isModified()) {
QMessageBox::StandardButton ret;
ret = QMessageBox::warning(this, tr("Application"),
tr("The picture has been modified.\n"
"Do you want to save your changes?"),
QMessageBox::Save | QMessageBox::Discard | QMessageBox::Cancel);
if (ret == QMessageBox::Save)
return save();
else if (ret == QMessageBox::Cancel)
return false;
}*/
return true;
}
void MainWindow::cameraCap()
{
cameraCapture *camera = new cameraCapture();
camera->captureWindow();
}
if (!file.open(QFile::ReadOnly | QFile::Text))
{
QMessageBox::warning(this, tr("Warp Faces"),
tr("Cannot read file %1:\n%2.")
.arg(fileName)
.arg(file.errorString()));
return;
}
QApplication::setOverrideCursor(Qt::WaitCursor);
// Update the progressbar
pb->setValue(contImages);
setCentralWidget(widget);
createGraphActions();
// Before loading the picture check if there is an old mesh and ask
// if the user want to load it or not if found
QString fileNameMesh;
fileNameMesh.insert(0,fileName.mid(0,fileName.length()-4));
fileNameMesh = fileNameMesh.insert(fileName.length()-4,"-warped.dat");
QFile fileMesh(fileNameMesh);
if (fileMesh.open(QFile::ReadOnly | QFile::Text))
{
// There is a previous mesh
QMessageBox msgBox(this);
msgBox.setText(tr("There is a previous mesh for this image, do you want to load it?"));
msgBox.setInformativeText(tr("Previous mesh found!"));
msgBox.setIcon(QMessageBox::Information);
msgBox.setStandardButtons(QMessageBox::Yes | QMessageBox::No);
int ret = msgBox.exec();
if (ret == QMessageBox::Yes)
fileNameMesh = fileNameMesh; // keep the mesh that was found next to the image
else
fileNameMesh = QString("pos.dat");
}
else
fileNameMesh = QString("pos.dat");
widget->loadPicture(fileName,fileNameMesh.toLatin1().data(),(char*)"pos.dat",false);
QApplication::restoreOverrideCursor();
setCurrentFile(fileName);
statusBar()->showMessage(tr("File loaded: ").append(fileName), 2000);
//QTextStream out(&file);
QApplication::setOverrideCursor(Qt::WaitCursor);
widget->savePicture(fileName);
QApplication::restoreOverrideCursor();
setCurrentFile(fileName);
statusBar()->showMessage(tr("File saved:").append(fileName), 2000);
return true;
}
void MainWindow::warpImage()
{
if (widget != NULL)
widget->updatePicture(true);
}
void MainWindow::average()
{
QString shownName;
if (curFile.isEmpty())
shownName = "";
else
shownName = strippedName(curFile);
move((screenWidth-width)/2,(screenHeight - height)/2);
}
#ifndef MIXER_H
#define MIXER_H
// Include header files
#include "graphwidget.h"
#include <QtGui/QGraphicsView>
#include <QFileDialog>
#include <QMessageBox>
class QAction;
class QMenu;
class QGraphWidget;
class Mixer
{
private:
GraphWidget *widget;
public:
// Constructor of the object
Mixer(GraphWidget *);
void autoDetectAverage();
int simpleAverage(bool);
int simpleAverage2(bool);
void equalizeHistogram(uchar *, int, int);
void HighPassFilter(uchar *pixels,int width, int height);
};
#endif
#include "mixer.h"
Mixer::Mixer(GraphWidget *mainw)
{
widget = mainw;
void Mixer::autoDetectAverage()
{
QStringList files = QFileDialog::getOpenFileNames(
0,
"Select one or more files to open",
"./",
"Images (*.png *.xpm *.jpg *.bmp)");
// Start iterations
while( it != list.end() )
{
widget->updatePicture(true);
widget->savePicture(QString (""));
widget->downloadAddMesh(&avgMesh);
//printf("value element 10 avgMesh %f %f\n",avgMesh.value(10),avgMesh.value(11));
for (int i=0;i<570;i++)
{
for (int j=0;j<380*3;j=j+3)
{
co = new QColor(widget->currentImage().pixel(k++,i));
co->getRgb(&r,&g,&b);
avgPicArray[j+0][i] += r;
avgPicArray[j+1][i] += g;
avgPicArray[j+2][i] += b;
}k=0;
}
++it;
}
else
{
nTotal--;
printf("Picture rejected!\n");
++it;
}
k=0;
co = new QColor(avgPicArray[j][i],avgPicArray[j+1][i],avgPicArray[j+2][i]);
avgPic->setPixel(k++,i,co->rgb());
}k=0;
}
// Update the average array with the new values
for (int i = 0; i < 164; i++) avgMesh.replace(i, avgMesh.value(i) / (list.size() - nRejected)); // divide by the number of accepted pictures
if (!f) {
fprintf(stderr,"Error saving the mesh file!\n");
exit(1);
}
fprintf(f,"86\n");
widget->loadPicture("average.jpg","pos.dat","avg.dat",false);
widget->updatePicture();
widget->savePicture("morphedAverage.jpg");
*/
// Save the average picture
avgPic->save("average.bmp");
QMessageBox msgBox;
msgBox.setText("Image created and saved succesfully: average.bmp\n");
msgBox.exec();
exit(0);
QStringList::Iterator it = files.begin();
QString s;
QImage resImg ;
QImage avgPic(380,570,QImage::Format_RGB32);
int r,g,b;
QColor co;
int k = 0, z;
int nTotal = files.size();
while( it != files.end() )
{
// Edit the name of the mesh file associated with the image file
fileNameMesh = QString(*it);
fileNameMesh.chop(3);
fileNameMesh.append(QString("dat"));
s = *it;
fileNameMesh = QString(s);
fileNameMesh.chop(3);
fileNameMesh.append(QString("dat"));
fclose(f);
co.getRgb(&r,&g,&b);
avgPicArray[j+0][i] += r;
avgPicArray[j+1][i] += g;
avgPicArray[j+2][i] += b;
}k=0;
}
it++;
}
k = 0;
co = QColor(avgPicArray[j][i],avgPicArray[j+1][i],avgPicArray[j+2][i]);
avgPic.setPixel(k++,i,co.rgb());
}k=0;
}
if (!ff) {
fprintf(stderr,"Error saving the mesh file!\n");
exit(1);
}
//Print the number of nodes first
fprintf(ff,"86\n");
fclose(ff);
widget->loadPicture("/tmp/average.jpg",(char*)"pos.dat",(char*)"avg.dat",false);
widget->updatePicture(false);
//widget->savePicture(avgee);
avgPic = QImage(widget->currentImage());
resImg = avgPic.scaled(190,285,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
//if (equalization == true) equalizeHistogram(avgPic.bits(),190,285);
resImg.save(avgee);
return (1);
// First of all, obtain the minimum and maximum values of the grey range in the picture
uchar min_r=255,min_g=255,min_b=255;
uchar max_r=0,max_g=0,max_b=0;
uchar R,G,B;
for (int i = 0;i < 3 * width * height;i=i+3)
{
R = *(pixels+i+0);
G = *(pixels+i+1);
B = *(pixels+i+2);
* (pixFiltered + k + 0) = newRed;
* (pixFiltered + k + 1) = newGreen;
* (pixFiltered + k + 2) = newBlue;
k++;
}
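The middle of the filtering routine is shortened above: only the min/max scan and the write-back to pixFiltered are visible. The sketch below shows a simple linear min/max contrast stretch that is consistent with those visible parts (only the red channel is shown for brevity); stretchChannelSketch is a hypothetical helper and the stretch formula is an assumption.
// Illustrative sketch (not in the original listing): linear min/max contrast stretch.
static void stretchChannelSketch(uchar *pixels, int width, int height)
{
    uchar min_r = 255, max_r = 0;
    for (int i = 0; i < 3 * width * height; i += 3)
    {
        if (pixels[i] < min_r) min_r = pixels[i]; // running minimum of the red channel
        if (pixels[i] > max_r) max_r = pixels[i]; // running maximum of the red channel
    }
    int range = max_r - min_r;
    if (range == 0) return;                       // flat channel: nothing to stretch
    for (int i = 0; i < 3 * width * height; i += 3)
        pixels[i] = (uchar)(255 * (pixels[i] - min_r) / range); // remap to the full [0,255] range
}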
QStringList::Iterator it = files.begin();
QString s;
QImage resImg ;
QImage avgPic(380,570,QImage::Format_RGB32);
int r,g,b;
QColor co;
int k = 0, z;
int nTotal = files.size();
while( it != files.end() )
{
// We will not reject any picture because there is not pre-allocating
s = *it;
avgPic.load(s);
avgPic = avgPic.scaled(380,570,Qt::IgnoreAspectRatio,Qt::SmoothTransformation);
// Store the values into avgPicArray
for (int i=0;i<570;i++)
{
for (int j=0;j<380*3;j=j+3)
{
co = QColor(avgPic.pixel(k++,i));
co.getRgb(&r,&g,&b);
avgPicArray[j+0][i] += r;
avgPicArray[j+1][i] += g;
avgPicArray[j+2][i] += b;
}k=0;
}
it++;
}
avgPicArray[j+0][i] = avgPicArray[j+0][i]/nTotal;
avgPicArray[j+1][i] = avgPicArray[j+1][i]/nTotal;
avgPicArray[j+2][i] = avgPicArray[j+2][i]/nTotal;
}k=0;
}
k = 0;
co = QColor(avgPicArray[j][i],avgPicArray[j+1][i],avgPicArray[j+2][i]);
avgPic.setPixel(k++,i,co.rgb());
}k=0;
}
return (1);
#ifndef MIXERAUT_H
#define MIXERAUT_H
// Include header files
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <QListView>
#include <QDialog>
#include <QPushButton>
#include <QDesktopWidget>
#include "stonefacewidget.h"
#include "automaticwidget.h"
class QAction;
class QMenu;
class QGraphWidget;
class WelcomeWidget;
class AutomaticWidget;
class StoneFaceWidget;
class MixerAutomatic : public QWidget
{
Q_OBJECT
public:
// Constructor of the object
MixerAutomatic(StoneFaceWidget *, AutomaticWidget *);
int Start(QString,int,int,bool);
int setFilesList(QStringList);
private:
AutomaticWidget *qw;
StoneFaceWidget *widget;
QStringList *files;
signals:
void SetStatus(QString,int,int,int,int,int,int,int, QImage, QImage);
public slots:
int SearchFiles();
private slots:
};
#endif
#include "mixeraut.h"
int MixerAutomatic::setFilesList(QStringList l)
{
// keep a copy of the list; storing the address of the by-value parameter would dangle
files = new QStringList(l);
return files->size();
}
int MixerAutomatic::SearchFiles()
{
files = new QStringList(QFileDialog::getOpenFileNames(
0,
"Select one or more files to open",
"./",
"Images (*.png *.xpm *.jpg *.bmp *.gif)"));
return files->size();
}
// Necessary to check that all the images have the same size ...
int hlp=0,totalOrig=files->size();
// Start iterations
while( it != files->end() )
{
counter++;
// if (grayscale == true)
// widget = new StoneFaceWidget();
// else
widget = new StoneFaceWidget();
// We will reject the picture if any of the elements of the face is not found
//fprintf(stderr, "\nTarget: %s\n", (char*)s.toLatin1().data());
if (widget->loadPicture(*it,(char*)"tinyposaut.dat",(char*)"tinyposaut2.dat",(char*)"tinytrianglesaut.dat",true,true) == 0 && widget->updatePicture() == 0)
s = *it;
hlp++;
co->getRgb(&r,&g,&b);
avgPicArray[j][i] += r;
avgPicArray[j+1][i] += g;
avgPicArray[j+2][i] += b;
}k=0;
}
s.append("-morphed.jpg");
//widget->savePicture(s);
printf("State: Accepted!\n");
++it;
}
else
{
s = *it;
nTotal--;
printf("State: Rejected!\n");
emit SetStatus(*it,counter,totalOrig,widget->currentImage().depth(),
widget->currentImage().numColors(), widget->currentImage().numBytes(),
widget->currentImage().width(),widget->currentImage().height(),
widget->currentImage(),widget->currentImage());
//delete widget;
++it;
}
PBSetValue((int) (counter/nTotal));
}
k=0;
co = new QColor(avgPicArray[j][i],avgPicArray[j+1][i],avgPicArray[j+2][i]);
avgPic->setPixel(k++,i,co->rgb());
}k=0;
}
if (!f) {
fprintf(stderr,"Error saving the mesh file!\n");
exit(1);
}
//Print the number of nodes first
fprintf(f,"12\n");
fclose(f);
if (grayscale== true)
{
int grayColor = 0;int rx,gx,bx;
for (int i=0;i<180;i++)
{
for (int j=0;j<180;j++)
{
co = new QColor(avgPic->pixel(j,i));
co->getRgb(&rx,&gx,&bx);
grayColor = (rx + gx + bx )/3;
co->setRgb(grayColor,grayColor,grayColor);
avgPic->setPixel(i,j,co->rgb());
}k=0;
}
}
avgPic = new QImage(avgPic->scaled(190,285,Qt::IgnoreAspectRatio,Qt::SmoothTransformation) );
avgPic->save(outputname);
return (1);
}
#ifndef NODE_H
#define NODE_H
#include <QGraphicsItem>
#include <QList>
class GraphWidget;
class QGraphicsSceneMouseEvent;
class Node : public QGraphicsItem
{
public:
Node(GraphWidget *graphWidget, int nodeNumber, char color, bool helpNode);
qreal posX();
qreal posY();
void calculateForces();
bool advance();
private:
QPointF newPos;
GraphWidget *graph;
int nNumber;
char color;
};
#endif
#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QPainter>
#include <QStyleOption>
#include "node.h"
#include "graphwidget.h"
void Node::calculateForces()
{
if (helpNode==false) graph->nodeMoved(nNumber,newPos.x(),newPos.y());
if (helpNode==false) newPos = pos();
bool Node::advance()
{
if (newPos == pos())
return false;
if (helpNode==false) setPos(newPos);
return true;
}
char Node::getColor()
{
return color;
}
void Node::setColor(char _color)
{
color = _color;
}
QRectF Node::boundingRect() const
{
qreal adjust = 2;
return QRectF(-10 - adjust, -10 - adjust,23 + adjust, 23 + adjust);
return path;
}
if (helpNode==false)graph->itemMoved();
break;
default:
break;
};
update();
QGraphicsItem::mouseReleaseEvent(event);
}
#ifndef NODEAUTOM_H
#define NODEAUTOM_H
#include <QGraphicsItem>
#include <QList>
#include "stonefacewidget.h"
class StoneFaceWidget;
class QGraphicsSceneMouseEvent;
qreal posX();
qreal posY();
void calculateForces();
bool advance();
protected:
QVariant itemChange(GraphicsItemChange change, const QVariant &value);
QPointF newPos;
StoneFaceWidget *graph;
int nNumber;
char color;
};
#endif
#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QPainter>
#include <QStyleOption>
#include "nodeaut.h"
void NodeAutomatic::calculateForces()
{
//if (!scene() || scene()->mouseGrabberItem() == this) { //------- Currently just returns the new position as whatever position it's at.
graph->nodeMoved(nNumber,newPos.x(),newPos.y());
newPos = pos();
//}
bool NodeAutomatic::advance()
{
if (newPos == pos())
return false;
setPos(newPos);
return true;
}
char NodeAutomatic::getColor()
{
return color;
}
return path;
}
break;
default:
break;
};
update();
QGraphicsItem::mouseReleaseEvent(event);
}
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <QListView>
#include <QDialog>
#include <QPushButton>
#include <QDesktopWidget>
#include "mainwindow.h"
#include "eigenfaces.h"
class EigenFaces;
private:
// MainWindow handle
QMainWindow *mainWin;
void StartConnections();
// Eigenfaces objects
EigenFaces *eigen;
// Interface objects
QPushButton *buttonSelectDB;
QPushButton *buttonSelectFace;
QPushButton *buttonStartSearch;
QLabel *infoDB;
QLabel *infoFace;
QLabel *instructionsLbl;
QLabel *instructionsLbl2;
QLabel *mixInfo;
QLabel *face1;
QProgressBar *pb;
QString *pathImageSelected;
QGridLayout *lLayout;
void StartInterface();
void MoveCenterScreen();
public:
SearchWidget(QMainWindow *parent);
private slots:
void SelectDB();
void SelectFace();
void StartSearch();
public slots:
void GetInfoDB(QString , int , int , int );
void GetNearest(QString);
void GetProgress(int);
void InfoNearest(QString);
};
#endif
#include "searchwidget.h"
SearchWidget::SearchWidget(QMainWindow *parent)
{
// Remember the handle of the parent for the returning function
mainWin = parent;
void SearchWidget::MoveCenterScreen()
{
move((screenWidth-width)/2,(screenHeight - height)/2);
}
void SearchWidget::StartConnections()
{
connect(buttonSelectDB, SIGNAL(clicked()), this, SLOT( SelectDB() ) );
connect(buttonSelectFace, SIGNAL(clicked()), this, SLOT(SelectFace()));
connect(buttonStartSearch, SIGNAL(clicked()), this, SLOT(StartSearch()));
}
void SearchWidget::SelectDB()
{
if (fileName.isEmpty() == false)
{
eigen = new EigenFaces(this);
connect (eigen, SIGNAL(InfoDB(QString , int , int , int )),
this, SLOT(GetInfoDB(QString , int , int , int )));
eigen->SetDB(fileName.toLatin1().data());
}
}
void SearchWidget::SelectFace()
{
QString fileName = QFileDialog::getOpenFileName(this, tr("Select the face to recognise"),
"./", tr("Images (*.png *.xpm *.jpg *.bmp)"));
if (fileName.isEmpty() == false)
{
pathImageSelected = new QString(fileName);
QImage img(fileName);
QString mixInfoL("");
mixInfoL += "Information:\nPath: ..." + pathImageSelected->right(20) + "\n";
mixInfoL += "Depth: " + QString::number(img.depth()) + "\n";
mixInfoL += "Number of Colors: " + QString::number(img.numColors()) + " colors\n";
mixInfoL += "Number of bytes: " + QString::number(img.numBytes()) + " bytes\n";
mixInfoL += "Size: " + QString::number(img.width()) + " x " + QString::number(img.height() );
mixInfo->setText(mixInfoL);
}
}
void SearchWidget::StartSearch()
{
if (pathImageSelected->isEmpty() == false)
{
eigen->Recognise(pathImageSelected->toLatin1().data(), true);
}
}
void SearchWidget::StartInterface()
{
instructionsLbl = new QLabel("Step #1 - Please, select the face database file you want to use", this);
instructionsLbl2 = new QLabel("Step #2 - Select the face you need to recognise", this);
QString infoDBL("");
infoDBL += "Information of the database:\n";
infoDBL += "Date: \n";
infoDBL += "Number of faces: \n";
infoDBL += "Size of faces: \n";
QString infoFaceL("");
infoFaceL += "Information about the closest face:\nPath: ...\n";
infoFaceL += "Name: \n";
infoFaceL += "Age: \n";
infoFaceL += "Gender: \n";
infoFaceL += "Nacionality: \n";
infoFaceL += "Proffession:";
infoFace = new QLabel(infoFaceL);
QString mixInfoL("");
mixInfoL += "Information:\nPath: ...\n";
mixInfoL += "Depth: \n";
mixInfoL += "Number of Colors: \n";
mixInfoL += "Number of bytes: \n";
mixInfoL += "Size: ";
mixInfo = new QLabel(mixInfoL);
pb = new QProgressBar(this);
pb->setValue(0);
// Creation of layout to define the positions of the objects
lLayout = new QGridLayout;
lLayout->addWidget(instructionsLbl,0,0,1,2);
lLayout->addWidget(buttonSelectDB,1,0,1,1);
lLayout->addWidget(infoDB,1,1,1,1);
// ------------------------------------------
lLayout->addWidget(instructionsLbl2,3,0,1,2);
lLayout->addWidget(buttonSelectFace,4,0,2,1);
lLayout->addWidget(mixInfo,4,1,2,1);
// ------------------------------------------
lLayout->addWidget(infoFace,6,0,1,1);
lLayout->addWidget(face1,6,1,2,1);
lLayout->addWidget(buttonStartSearch,7,0,1,1);
// ------------------------------------------
lLayout->addWidget(pb,8,0,1,2);
setLayout(lLayout);
}
QString infoDBL("");
infoDBL += "Information of the database:\n";
infoDBL += "Date: " + dateS + "\n";
infoDBL += "Number of faces: " + QString::number(numFaces) + "\n";
infoDBL += "Size of faces: "+ QString::number(widthFaces) + " x " + QString::number(heightFaces);
infoDB->setText(infoDBL);
}
void SearchWidget::GetNearest(QString pathNearest)
{
printf("path nearest: %s\n",pathNearest.toLatin1().data());
face1->setPixmap(QPixmap(QPixmap(pathNearest).scaled(120,180)));
}
void SearchWidget::InfoNearest(QString infoSubject)
{
infoFace->setText(infoSubject);
void SearchWidget::GetProgress(int p)
{
pb->setValue(p);
}
#ifndef STONEWIDGET_H
#define STONEWIDGET_H
#include <QtGui/QGraphicsView>
#include "nodeaut.h"
#include "detector.h"
class NodeAutomatic;
void itemMoved();
QImage currentImage();
void nodeMoved(int nodeNumber, qreal newX,qreal newY);
void savePicture(const QString &fileName);
int loadPicture(const QString &fileName, char* origMesh, char* destMesh, char* triang, bool tiny, bool autodetection);
int updatePicture();
void downloadAddMesh(QList<qreal>*);
protected:
int aNodes[38][2];
int aNodes2[38][2];
int aTri[70][3];
int nTri;
bool inside, flag;
double A[3][3];
double in[3][3], alpha, beta, gamma;
int Apx,Apy,Bpx,Bpy,Cpx,Cpy,Xp,Yp;
QRgb valor;
double detInv; // detInv is the inverse of the determinant of A
private:
QString *widgetFileName;
QPixmap *fa1,*fa2;
long lineLength(int A1, int B1, int A2, int B2);
double areaOfTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy);
int detectElements(bool stretch);
int detectElementsTiny(bool strech);
int timerId;
bool insideTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy, int Px, int Py);
QGraphicsPixmapItem *face1,*face2;
QImage *imf1,*imf2;
QGraphicsScene *superScene;
signals:
void zoneScanned(QString);
};
#endif
#include "stonefacewidget.h"
#include <QDebug>
#include <QGraphicsScene>
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <math.h>
#include <QPushButton>
StoneFaceWidget::StoneFaceWidget()
: timerId(0)
{
void StoneFaceWidget::itemMoved()
{
if (!timerId)
timerId = startTimer(1000 / 25); // 25 updates per second
}
aNodes[nodeNumber][0] = (int)newX;
aNodes[nodeNumber][1] = (int)newY;
if (!itemsMoved) {
killTimer(timerId);
timerId = 0;
}
}
int StoneFaceWidget::loadPicture(const QString &fileName, char* origMesh, char* destMesh, char* triang, bool tiny, bool autodetection)
{
//--get background image-----------------------------------------------
printf("loading...\n");
widgetFileName = new QString(fileName.toLatin1().data());
fa1 = new QPixmap(fileName);
fa2 = new QPixmap(fileName);
superScene->addItem(face1);
//superScene->addItem(face2);
face1->setPos(0,0);
//face2->setPos(240,0);
// by default we position all the nodes in the final destination. After the recognition the positions will change
FILE* f=fopen(origMesh,"r");
if (!f) {
fprintf(stderr,"Original mesh: %s not there!\n",origMesh);
exit(1);
}
int nNodes, x, y, z;
char color;
z = fscanf(f,"%d\n",&nNodes);
int nn = nNodes;
node[i]->calculateForces();
}
fclose(f);
int nNodes2;
char c;
z = fscanf(f3,"%d\n",&nNodes2);
fclose(f3);
/*
*/
// Read the points to know the triangles done
FILE* f4=fopen(triang,"r");
if (!f4){
fprintf(stderr,"Triangles data: %s not there!\n", triang);
exit(-1);
}
z = fscanf(f4,"%d\n",&nTri);
fclose(f4);
if (autodetection == true)
{
if (tiny == true)
{
if (detectElementsTiny(false) == 0)
{
}
else
{
return 1;
}
}
else
{
if (detectElements(false) == 0)
{
}
else
{
return 1;
}
}
}
return 0;
}
void StoneFaceWidget::savePicture(const QString &fileName)
{
imf2->save(fileName);
}
printf("list size: %d\n",list->size() );
// the length of this list must be the same as 2 * the number of nodes in the mesh
if (list->size() == 24)
{
list->replace(i,list->value(i) + node->posX());
list->replace(i+1,list->value(i+1) + node->posY());
printf("nodeX %f , nodeY %f\n", node->posX(), node->posY());
i=i+2;
}
}
}
int StoneFaceWidget::updatePicture()
{
flag = true;
}
}
if (flag == false) {printf("Point out of matrix ?? point X(%d,%d)\n",i, j);return(-1);}
flag = false;
}
}
//Now
Apx = aNodes [aTri[ mTriangles[i][j] ][0]][0]; // Point Ap of the triangle (original triangle)
Apy = aNodes [aTri[ mTriangles[i][j] ][0]][1];
Bpx = aNodes [aTri[ mTriangles[i][j] ][1]][0]; // Point Bp of the triangle (original triangle)
Bpy = aNodes [aTri[ mTriangles[i][j] ][1]][1];
Cpx = aNodes [aTri[ mTriangles[i][j] ][2]][0]; // Point Cp of the triangle (original triangle)
Cpy = aNodes [aTri[ mTriangles[i][j] ][2]][1];
valor = imf1->pixel(Xp,Yp);
QColor color(valor);
imf2->setPixel(i,j,color.rgb());
//superScene->update(0,0,superScene->width(),superScene->height());
QPixmap *k = new QPixmap(imf1->width(),imf1->height());
face1->setPixmap(k->fromImage(*imf2));
return (0);
}
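Most of the inverse-mapping arithmetic of updatePicture() is shortened above (the matrices A and in, and the coefficients alpha, beta and gamma). The sketch below illustrates the barycentric form of that mapping from a destination pixel back into the source triangle; mapPointSketch is a hypothetical helper and follows the listing's naming for the source vertices Apx..Cpy.
// Illustrative sketch (not in the original listing): map a destination pixel (x, y) back into
// the source triangle using barycentric coordinates (alpha, beta, gamma).
// (Ax,Ay)..(Cx,Cy) is the destination triangle, (Apx,Apy)..(Cpx,Cpy) the source triangle.
static bool mapPointSketch(double x, double y,
                           double Ax, double Ay, double Bx, double By, double Cx, double Cy,
                           double Apx, double Apy, double Bpx, double Bpy, double Cpx, double Cpy,
                           double *Xp, double *Yp)
{
    double det = (By - Cy) * (Ax - Cx) + (Cx - Bx) * (Ay - Cy);
    if (det == 0.0) return false;                         // degenerate triangle
    double alpha = ((By - Cy) * (x - Cx) + (Cx - Bx) * (y - Cy)) / det;
    double beta  = ((Cy - Ay) * (x - Cx) + (Ax - Cx) * (y - Cy)) / det;
    double gamma = 1.0 - alpha - beta;
    if (alpha < 0 || beta < 0 || gamma < 0) return false; // (x, y) lies outside the triangle
    // the same weights applied to the source vertices give the source sampling position
    *Xp = alpha * Apx + beta * Bpx + gamma * Cpx;
    *Yp = alpha * Apy + beta * Bpy + gamma * Cpy;
    return true;
}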
int midYeyeL, midYeyeR; // Contains the mean height of both eyes, used to determine the ears
int faceX,faceY,faceW,faceH;
int eyesX,eyesY,eyesW,eyesH;
int mouthY,minYeye,maxXeyeL;
int minXeyeR,avgXeyeR;
int nodei = 0; // for list moving
int extremeEyeR,extremeEyeL,avgXeyeL;
// Creation of a list with the nodes to be moved
// ####################################################################################
// -------------------- First: Face recognition --------------------------------------
// ####################################################################################
// Creation of the detector and obtaining of the coordinates for the face
dFace = new Detector(ImageFace);
dFace->DetectAndDraw('F',&px,&py,&imHeight,&imWidth);
// Now we just want the face scaled with the coordinates changed and work with result
// ####################################################################################
// -------- Second: Both eyes recognition -------------------
// ####################################################################################
printf("eyes...\n");
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
for (int i = faceY ; i < faceY + faceH*3/4;i++)
{
for (int j = 3* faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=faceX;
}
// Fill the eyes boundary variables. Through the face boundaries the eyes will be provided
eyesX = px; eyesY = py; eyesW = imWidth; eyesH = imHeight;
// ####################################################################################
// -------- Third: Left eye recognition inside left half eyes region ------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=eyesX;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3* eyesX; j < (eyesX + eyesW/2)*3; j+=3)
{
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=eyesX;
}
// ####################################################################################
// -------- Third: Right eye recognition inside left upper quarter -------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h = eyesX + eyesW/2;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3*(eyesX + eyesW/2); j < (eyesX + eyesW)*3; j+=3)
{
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=eyesX + eyesW/2;
}
// cvSaveImage("right.jpg",subImageEyeR);
dEyeR = new Detector(subImageEyeR);
dEyeR->DetectAndDraw('R',&px,&py,&imHeight,&imWidth);
// Find the minimum height position between both eyes for the nose allocation
if (py + imHeight > minYeye) minYeye = py + imHeight;
// Boundary
minXeyeR = px + 0.25 * imWidth;
// Move the nodes where we think they should be... important, an offset of 0.25*size is added
nodei = 0;
foreach (NodeAutomatic *node, nodes)
{
}
// ####################################################################################
// -------- Fourth: Mouth recognition in the center side -------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
for (int i = faceY + faceH*3/4 ; i < faceY + faceH;i++)
{
for (int j = 3*faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=faceX;
}
//cvSaveImage("mouth.jpg",subImageMouth);
dMouth = new Detector(subImageMouth);
dMouth->DetectAndDraw('M',&px,&py,&imHeight,&imWidth);
mouthY = py;
// Move the nodes where we think they should be... important, an offset of 0.15*size is added
nodei = 0;
nodei++;
}
// ####################################################################################
// -------- Fifth: Nose recognition inside the center side -------------------
// ####################################################################################
((char*)subImageNose->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageNose->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageNose->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=faceX;
}
//cvSaveImage("nose.jpg",subImageNose);
dNose = new Detector(subImageNose);
dNose->DetectAndDraw('N',&px,&py,&imHeight,&imWidth);
// Move the nodes where we think they should be... important, an offset of 0.10*size is added
nodei = 0;
if ((px != 0) and (py != 0))
foreach (NodeAutomatic *node, nodes)
{
if (nodei == 19) node->setPos(px + 0.10 * imWidth, py + imHeight/2);
if (nodei == 20) node->setPos(px + 0.90 * imWidth, py + imHeight/2);
if (nodei == 21) node->setPos(px + imWidth/2, py + 0.10 * imHeight);
if (nodei == 22) node->setPos(px + imWidth/2, py + 0.90 * imHeight);
node->calculateForces();
//if (nodei == 6) node->setPos((faceX+faceW),midYeyeR );
nodei++;
return 0;
double StoneFaceWidget::areaOfTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy)
{
bool StoneFaceWidget::insideTriangle(int Ax, int Ay, int Bx, int By, int Cx, int Cy, int Px, int Py)
{
int midYeyeL, midYeyeR; // Contains the mean height of both eyes, used to determine the ears
int faceX,faceY,faceW,faceH;
int eyesX,eyesY,eyesW,eyesH;
int mouthY,minYeye,maxXeyeL;
int minXeyeR,avgXeyeR;
int nodei = 0; // for list moving
int extremeEyeR,extremeEyeL,avgXeyeL;
// Creation of a list with the nodes to be moved
// ####################################################################################
// -------------------- First: Face recognition --------------------------------------
// ####################################################################################
// Creation of the detector and obtaining of the coordinates for the face
dFace = new Detector(ImageFace);
dFace->DetectAndDraw('F',&px,&py,&imHeight,&imWidth);
// Now we just want the face scaled with the coordinates changed and work with result
// Fill the face boundary variables. Through the face boundaries the rest of the elements will be provided
faceX = px; faceY = py; faceW = imWidth; faceH = imHeight;
// ####################################################################################
// -------- Second: Both eyes recognition -------------------
// ####################################################################################
//cvSaveImage("eyes.jpg",subImageEyes);
subImageEyes = cvCreateImage(cvSize(result.width(), result.height()), IPL_DEPTH_8U, 3);
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=faceX;
for (int i = faceY ; i < faceY + faceH*3/4;i++)
{
for (int j = 3* faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyes->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=faceX;
}
// Fill the eyes boundary variables. Through the face boundaries the eyes will be provided
eyesX = px; eyesY = py; eyesW = imWidth; eyesH = imHeight;
// ####################################################################################
// -------- Third: Left eye recognition inside left half eyes region ------------------
// ####################################################################################
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h=eyesX;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3* eyesX; j < (eyesX + eyesW/2)*3; j+=3)
{
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyeL->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}h=eyesX;
}
minYeye = py + imHeight;
// Copy the information between both image formats. Only the face region is needed to locate the eye squares
h = eyesX + eyesW/2;
for (int i = eyesY ; i < eyesY + eyesH;i++)
{
for (int j = 3*(eyesX + eyesW/2); j < (eyesX + eyesW)*3; j+=3)
{
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageEyeR->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}
h = eyesX + eyesW/2;
}
// cvSaveImage("right.jpg",subImageEyeR);
dEyeR = new Detector(subImageEyeR);
dEyeR->DetectAndDraw('R',&px,&py,&imHeight,&imWidth);
// ####################################################################################
// -------- Fourth: Mouth recognition in the center side -------------------
// ####################################################################################
// Copy the information between both image formats. Only the lower face region is needed to locate the mouth
h=faceX;
for (int i = faceY + faceH*3/4 ; i < faceY + faceH;i++)
{
for (int j = 3*faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageMouth->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}
h = faceX;
}
//cvSaveImage("mouth.jpg",subImageMouth);
dMouth = new Detector(subImageMouth);
dMouth->DetectAndDraw('M',&px,&py,&imHeight,&imWidth);
// ####################################################################################
// -------- Fifth: Nose recognition inside the center side -------------------
// ####################################################################################
// Copy the information between both image formats. Only the central face region is needed to locate the nose
h=faceX;
if (mouthY == 0 ) mouthY = faceY + faceH;
for (int i = minYeye - eyesH/2 ; i < mouthY;i++)
{
for (int j = 3*faceX; j < (faceX + faceW)*3; j+=3)
{
((char*)subImageNose->imageData)[i*result.width()*3 + j + 0] = qBlue(result.pixel(h,i)); // blue channel
((char*)subImageNose->imageData)[i*result.width()*3 + j + 1] = qGreen(result.pixel(h,i)); // green channel
((char*)subImageNose->imageData)[i*result.width()*3 + j + 2] = qRed(result.pixel(h,i)); // red channel
h++;
}
h = faceX;
}
//cvSaveImage("nose.jpg",subImageNose);
dNose = new Detector(subImageNose);
dNose->DetectAndDraw('N',&px,&py,&imHeight,&imWidth);
// Move the nodes to their estimated positions; note that an offset of 0.10*size is applied
nodei = 0;
if ((px != 0) and (py != 0))
foreach (NodeAutomatic *node, nodes)
{
if (nodei == 6) node->setPos(px + 0.5 * imWidth, py + 0.5 * imHeight);
node->calculateForces();
nodei++;
}
return 0;
}
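Each of the detection steps above repeats the same pattern: a rectangular region of the Qt image result is copied, channel by channel, into an OpenCV IplImage before the Haar-cascade detector is run on it. A compact helper capturing that pattern could look as follows; this is an illustrative sketch only (the name copyRegionToIplImage is invented here and is not part of the StoneFace sources):

#include <opencv/cv.h>
#include <QImage>
#include <QColor>

// Sketch: copy the rectangle (x, y, w, h) of a QImage into an already allocated
// 8-bit, 3-channel IplImage of the same overall size, writing BGR as in the loops above.
static void copyRegionToIplImage(const QImage &src, IplImage *dst,
                                 int x, int y, int w, int h)
{
    for (int i = y; i < y + h; ++i)
    {
        char *row = dst->imageData + i * dst->widthStep;   // start of row i in the IplImage
        for (int j = x; j < x + w; ++j)
        {
            QRgb px = src.pixel(j, i);
            row[3 * j + 0] = (char) qBlue(px);   // blue channel
            row[3 * j + 1] = (char) qGreen(px);  // green channel
            row[3 * j + 2] = (char) qRed(px);    // red channel
        }
    }
}

With such a helper, each of the five copy loops above would reduce to a single call, for example copyRegionToIplImage(result, subImageEyes, faceX, faceY, faceW, faceH*3/4) for the eye region.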
#ifndef WELCOMEWIDGET_H
#define WELCOMEWIDGET_H
#include <QtGui>
#include <QPixmap>
#include <QGraphicsView>
#include <QListView>
#include <QDialog>
#include <QPushButton>
#include <QDesktopWidget>
#include "mainwindow.h"
private:
// MainWindow handle
QMainWindow *mainWin;
void createActions();
void MoveCenterScreen();
public:
WelcomeWidget(QMainWindow *parent);
void mainWindowOrder(int);
int itemSelected(void);
// List with the options
QListWidget *listWidget;
};
#endif
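For readability, the class declaration that wraps the members listed in this header has roughly the following shape. The base class is an assumption (QWidget is used here; QDialog would also fit the includes), and the Q_OBJECT macro is only inferred from the moc rule for welcomewidget.h in the Makefile below:

#include <QtGui>

// Hypothetical reconstruction of the class opening, not verbatim from the source:
class WelcomeWidget : public QWidget
{
    Q_OBJECT

private:
    QMainWindow *mainWin;        // handle back to the main window
    void createActions();
    void MoveCenterScreen();

public:
    WelcomeWidget(QMainWindow *parent);
    void mainWindowOrder(int);
    int itemSelected(void);
    QListWidget *listWidget;     // list holding the menu options
};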
// Master Thesis - StoneFace v.1.0 beta C by Jorge Garcia Bueno
// Universidad Carlos III de Madrid + University of Glasgow 2009
// All rights reserved by the author
// --------------------------------------------------------------
// welcomewidget.cpp - Starting interface that controls the option menu
#include "welcomewidget.h"
WelcomeWidget::WelcomeWidget(QMainWindow *parent)
{
// Remember the handle of the parent so control can be returned to the main window
mainWin = parent;
}
void WelcomeWidget::createActions()
{
// Actions related to the buttons clicked
connect(listWidget, SIGNAL(itemSelectionChanged()), mainWin,SLOT(welcomeButton()) );
}
#############################################################################
# Makefile for building: stoneface
# Generated by qmake (2.01a) (Qt 4.5.0) on: Sat May 16 21:05:58 2009
# Project: stoneface.pro
# Template: app
# Command: /usr/bin/qmake -unix -o Makefile stoneface.pro
#############################################################################
CC = gcc
CXX = g++
DEFINES = -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED
CFLAGS = -pipe -O2 -Wall -W -D_REENTRANT $(DEFINES)
CXXFLAGS = -pipe -O2 -Wall -W -D_REENTRANT $(DEFINES)
INCPATH = -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -I.
LINK = g++
LFLAGS = -Wl,-O1
LIBS = $(SUBLIBS) -L/usr/lib -lQtGui -lQtCore -lpthread -lm -lcv -lhighgui -lcvaux
AR = ar cqs
RANLIB =
QMAKE = /usr/bin/qmake
TAR = tar -cf
COMPRESS = gzip -9f
COPY = cp -f
SED = sed
COPY_FILE = $(COPY)
COPY_DIR = $(COPY) -r
INSTALL_FILE = install -m 644 -p
INSTALL_DIR = $(COPY_DIR)
INSTALL_PROGRAM = install -m 755 -p
DEL_FILE = rm -f
SYMLINK = ln -sf
DEL_DIR = rmdir
MOVE = mv -f
CHK_DIR_EXISTS= test -d
MKDIR = mkdir -p
OBJECTS_DIR = ./
####### Files
SOURCES = automaticwidget.cpp \
cameraCapture.cpp \
detector.cpp \
eigenfaces.cpp \
graphwidget.cpp \
main.cpp \
mainwindow.cpp \
mixer.cpp \
mixeraut.cpp \
node.cpp \
nodeaut.cpp \
searchwidget.cpp \
stonefacewidget.cpp \
welcomewidget.cpp moc_automaticwidget.cpp \
moc_eigenfaces.cpp \
moc_graphwidget.cpp \
moc_mainwindow.cpp \
moc_mixeraut.cpp \
moc_searchwidget.cpp \
moc_stonefacewidget.cpp \
moc_welcomewidget.cpp
OBJECTS = automaticwidget.o \
cameraCapture.o \
detector.o \
eigenfaces.o \
graphwidget.o \
main.o \
mainwindow.o \
mixer.o \
mixeraut.o \
node.o \
nodeaut.o \
searchwidget.o \
stonefacewidget.o \
welcomewidget.o \
moc_automaticwidget.o \
moc_eigenfaces.o \
moc_graphwidget.o \
moc_mainwindow.o \
moc_mixeraut.o \
moc_searchwidget.o \
moc_stonefacewidget.o \
moc_welcomewidget.o
DIST = /usr/share/qt4/mkspecs/common/g++.conf \
/usr/share/qt4/mkspecs/common/unix.conf \
/usr/share/qt4/mkspecs/common/linux.conf \
/usr/share/qt4/mkspecs/qconfig.pri \
/usr/share/qt4/mkspecs/features/qt_functions.prf \
/usr/share/qt4/mkspecs/features/qt_config.prf \
/usr/share/qt4/mkspecs/features/exclusive_builds.prf \
/usr/share/qt4/mkspecs/features/default_pre.prf \
/usr/share/qt4/mkspecs/features/release.prf \
/usr/share/qt4/mkspecs/features/default_post.prf \
/usr/share/qt4/mkspecs/features/warn_on.prf \
/usr/share/qt4/mkspecs/features/qt.prf \
/usr/share/qt4/mkspecs/features/unix/thread.prf \
/usr/share/qt4/mkspecs/features/moc.prf \
/usr/share/qt4/mkspecs/features/resources.prf \
/usr/share/qt4/mkspecs/features/uic.prf \
/usr/share/qt4/mkspecs/features/yacc.prf \
/usr/share/qt4/mkspecs/features/lex.prf \
stoneface.pro
QMAKE_TARGET = stoneface
DESTDIR =
TARGET = stoneface
first: all
####### Implicit rules
.cpp.o:
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"
.cc.o:
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"
.cxx.o:
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"
.C.o:
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"
.c.o:
$(CC) -c $(CFLAGS) $(INCPATH) -o "$@" "$<"
$(TARGET): $(OBJECTS)
$(LINK) $(LFLAGS) -o $(TARGET) $(OBJECTS) $(OBJCOMP) $(LIBS)
dist:
@$(CHK_DIR_EXISTS) .tmp/stoneface1.0.0 || $(MKDIR) .tmp/stoneface1.0.0
$(COPY_FILE) --parents $(SOURCES) $(DIST) .tmp/stoneface1.0.0/ && $(COPY_FILE) --parents automaticwidget.h cameraCapture.h detector.h eigenfaces.h graphwidget.h mainwindow.h mixer.h mixeraut.h node.h nodeaut.h searchwidget.h stonefacewidget.h welcomewidget.h /usr/include/opencv/cv.h /usr/include/opencv/cxcore.h /usr/include/opencv/cxtypes.h /usr/include/opencv/cxerror.h /usr/include/opencv/cvver.h /usr/include/opencv/cxcore.hpp /usr/include/opencv/cvtypes.h /usr/include/opencv/cv.hpp /usr/include/opencv/cvcompat.h /usr/include/opencv/highgui.h /usr/include/opencv/cvaux.h /usr/include/opencv/cvaux.hpp /usr/include/opencv/cvvidsurv.hpp .tmp/stoneface1.0.0/ && $(COPY_FILE) --parents automaticwidget.cpp cameraCapture.cpp detector.cpp eigenfaces.cpp graphwidget.cpp main.cpp mainwindow.cpp mixer.cpp mixeraut.cpp node.cpp nodeaut.cpp searchwidget.cpp stonefacewidget.cpp welcomewidget.cpp .tmp/stoneface1.0.0/ && (cd `dirname .tmp/stoneface1.0.0` && $(TAR) stoneface1.0.0.tar stoneface1.0.0 && $(COMPRESS) stoneface1.0.0.tar) && $(MOVE) `dirname .tmp/stoneface1.0.0`/stoneface1.0.0.tar.gz . && $(DEL_FILE) -r .tmp/stoneface1.0.0
clean:compiler_clean
-$(DEL_FILE) $(OBJECTS)
-$(DEL_FILE) *~ core *.core
####### Sub-libraries
distclean: clean
-$(DEL_FILE) $(TARGET)
-$(DEL_FILE) Makefile
moc_eigenfaces.cpp: /usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/cvaux.h \
/usr/include/opencv/cvaux.hpp \
/usr/include/opencv/cvvidsurv.hpp \
/usr/include/opencv/highgui.h \
searchwidget.h \
mainwindow.h \
graphwidget.h \
node.h \
detector.h \
mixer.h \
cameraCapture.h \
eigenfaces.h \
welcomewidget.h \
automaticwidget.h \
stonefacewidget.h \
nodeaut.h \
mixeraut.h \
eigenfaces.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) eigenfaces.h -o moc_eigenfaces.cpp
moc_graphwidget.cpp: node.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
graphwidget.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) graphwidget.h -o moc_graphwidget.cpp
moc_mainwindow.cpp: graphwidget.h \
node.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
mixer.h \
cameraCapture.h \
eigenfaces.h \
/usr/include/opencv/cvaux.h \
/usr/include/opencv/cvaux.hpp \
/usr/include/opencv/cvvidsurv.hpp \
searchwidget.h \
mainwindow.h \
welcomewidget.h \
automaticwidget.h \
stonefacewidget.h \
nodeaut.h \
mixeraut.h \
mainwindow.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) mainwindow.h -o moc_mainwindow.cpp
moc_mixeraut.cpp: stonefacewidget.h \
nodeaut.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
automaticwidget.h \
mainwindow.h \
graphwidget.h \
node.h \
mixer.h \
cameraCapture.h \
eigenfaces.h \
/usr/include/opencv/cvaux.h \
/usr/include/opencv/cvaux.hpp \
/usr/include/opencv/cvvidsurv.hpp \
searchwidget.h \
welcomewidget.h \
mixeraut.h \
mixeraut.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) mixeraut.h -o moc_mixeraut.cpp
moc_searchwidget.cpp: mainwindow.h \
graphwidget.h \
node.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
mixer.h \
cameraCapture.h \
eigenfaces.h \
/usr/include/opencv/cvaux.h \
/usr/include/opencv/cvaux.hpp \
/usr/include/opencv/cvvidsurv.hpp \
searchwidget.h \
welcomewidget.h \
automaticwidget.h \
stonefacewidget.h \
nodeaut.h \
mixeraut.h \
searchwidget.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) searchwidget.h -o moc_searchwidget.cpp
moc_stonefacewidget.cpp: nodeaut.h \
stonefacewidget.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
stonefacewidget.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) stonefacewidget.h -o moc_stonefacewidget.cpp
moc_welcomewidget.cpp: mainwindow.h \
graphwidget.h \
node.h \
detector.h \
/usr/include/opencv/cv.h \
/usr/include/opencv/cxcore.h \
/usr/include/opencv/cxtypes.h \
/usr/include/opencv/cxerror.h \
/usr/include/opencv/cvver.h \
/usr/include/opencv/cxcore.hpp \
/usr/include/opencv/cvtypes.h \
/usr/include/opencv/cv.hpp \
/usr/include/opencv/cvcompat.h \
/usr/include/opencv/highgui.h \
mixer.h \
cameraCapture.h \
eigenfaces.h \
/usr/include/opencv/cvaux.h \
/usr/include/opencv/cvaux.hpp \
/usr/include/opencv/cvvidsurv.hpp \
searchwidget.h \
welcomewidget.h \
automaticwidget.h \
stonefacewidget.h \
nodeaut.h \
mixeraut.h \
welcomewidget.h
/usr/bin/moc-qt4 $(DEFINES) $(INCPATH) welcomewidget.h -o moc_welcomewidget.cpp
compiler_rcc_make_all:
compiler_rcc_clean:
compiler_image_collection_make_all: qmake_image_collection.cpp
compiler_image_collection_clean:
-$(DEL_FILE) qmake_image_collection.cpp
compiler_moc_source_make_all:
compiler_moc_source_clean:
compiler_uic_make_all:
compiler_uic_clean:
compiler_yacc_decl_make_all:
compiler_yacc_decl_clean:
compiler_yacc_impl_make_all:
compiler_yacc_impl_clean:
compiler_lex_make_all:
compiler_lex_clean:
compiler_clean: compiler_moc_header_clean
####### Compile
moc_automaticwidget.o: moc_automaticwidget.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_automaticwidget.o moc_automaticwidget.cpp
moc_eigenfaces.o: moc_eigenfaces.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_eigenfaces.o moc_eigenfaces.cpp
moc_graphwidget.o: moc_graphwidget.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_graphwidget.o moc_graphwidget.cpp
moc_mainwindow.o: moc_mainwindow.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_mainwindow.o moc_mainwindow.cpp
moc_mixeraut.o: moc_mixeraut.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_mixeraut.o moc_mixeraut.cpp
moc_searchwidget.o: moc_searchwidget.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_searchwidget.o moc_searchwidget.cpp
moc_stonefacewidget.o: moc_stonefacewidget.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_stonefacewidget.o moc_stonefacewidget.cpp
moc_welcomewidget.o: moc_welcomewidget.cpp
$(CXX) -c $(CXXFLAGS) $(INCPATH) -o moc_welcomewidget.o moc_welcomewidget.cpp
####### Install
install: FORCE
uninstall: FORCE
FORCE: