
Lifespan Age Transformation Synthesis

Roy Or-El¹, Soumyadip Sengupta¹, Ohad Fried², Eli Shechtman³, and Ira Kemelmacher-Shlizerman¹

¹ University of Washington   ² Stanford University   ³ Adobe Research

Abstract. We address the problem of single photo age progression and
regression—the prediction of how a person might look in the future, or
how they looked in the past. Most existing aging methods are limited
to changing the texture, overlooking transformations in head shape that
occur during the human aging and growth process. This limits the appli-
cability of previous methods to aging of adults to slightly older adults,
and application of those methods to photos of children does not produce
quality results. We propose a novel multi-domain image-to-image genera-
tive adversarial network architecture, whose learned latent space models
a continuous bi-directional aging process. The network is trained on the
FFHQ dataset, which we labeled for ages, gender, and semantic segmen-
tation. Fixed age classes are used as anchors to approximate continuous
age transformation. Our framework can predict a full head portrait for
ages 0–70 from a single photo, modifying both texture and shape of the
head. We demonstrate results on a wide variety of photos and datasets,
and show significant improvement over the state of the art.

1 Introduction
Age transformation is the problem of synthesizing a person’s appearance at a
different age while preserving their identity. When the age gap between the input
and the desired output is significant, e.g., going from 1 to 15 years old, the problem
becomes highly challenging due to pronounced changes in head shape as well as
facial texture. Solving for shape and texture together remains an open problem,
particularly if the method is required to create a lifespan of transformations, i.e.,
for any given input age, the method should synthesize the full span of ages 0–70
(rather than binary young-to-old transformations). In this paper, we aim
to enable exactly that—lifespan of transformations from a single portrait.
State of the art methods [43,1,48,31,46,44,13] focus on either minor age gaps,
or mostly on adults to elderly progression, as a large part of the aging transfor-
mation for adults lies in the texture (rather than shape), e.g., adding wrinkles.
The method of Kemelmacher-Shlizerman et al. [20] allows substantial age trans-
formations but it can be applied only on a cropped face area, rather than a
full head, and cannot be modified to allow backward age prediction (adult to
child) due to the optical-flow-based nature of the method. Apps like FaceApp allow
considerable transitions from adult to child and vice versa, but similar to state

Fig. 1: Given a single portrait photo (left), our generative adversarial network
predicts full head bi-directional age transformation. Note the diversity of the
input photos spanning ethnicity, age (baby, young, adult, senior), gender, and
facial expression. Gray border marks the input class; columns 2-7 are all synthesized by our method.

of the art methods they focus on texture, not shape, and thus produce sub-par
results, in addition to focusing only on the binary case of two ambiguous age
classes (“young”, “old”).
Theoretically, since time is a continuous variable, lifespan age transforma-
tion, e.g., 0–70 synthesis, should be modeled as a continuous process. However,
it can be very difficult to learn without large datasets of identity-specific ground
truth (same person captured over their lifespan). Therefore, we approximate
this continuous transformation by representing age with a fixed number of an-
chor classes in a multi-domain transfer setting. We represent age with six anchor
classes: three for children ages 0–2, 3–6, 7–9, one for young people 15–19, one
for adults 30–39, and one for 50–69. Those classes are designed to learn geo-
metric transformation in ages where most prominent shape changes occur, while
covering the full span of ages in the latent space.
To that end, we propose a new multi-domain image-to-image conditional
GAN architecture (Fig. 2). Our main encoder—the identity encoder—encodes
the input image to extract features associated with the person’s identity. Next,
unlike other multi-domain approaches, the various age domains are each repre-
sented by a unique distribution. Given a target age, it is assigned an age vector
code sampled from the appropriate distribution. The age code is sent to a map-
ping network, that maps age codes into a unified, learned latent space. The re-
sulting latent space approximates continuous age transformations. Our decoder
then fuses the learned latent age representation with the identity features via
StyleGAN2’s [19] modulated convolutions.
Disentanglement based domain transfer approaches such as MUNIT [15] and
FUNIT [27] can learn shape and texture deformation, e.g., transform cats to
dogs. However, these methods cannot be directly applied to transform age, in
a multi-domain setting, due to key limiting assumptions: MUNIT requires two
generators per domain pair, so training it for even 6 age classes would require
30 generators, which does not scale. FUNIT requires an exemplar image of the
target class and is not guaranteed to apply only age features from the exemplar,
as other attributes like skin color, gender and ethnicity may also be transferred.
On the other hand, multi-domain transfer algorithms such as StarGAN [7]
and STGAN [26] assume the domains to be distinct and encompassing contrast-
ing facial attributes. Age domains are highly correlated however, and thus those
algorithms struggle with the age transformation task. Methods like InterFace-
GAN [37] aim to address that via latent space traversal of an unconditionally
trained GAN. However, navigating these paths to transform a person into a
specific age is difficult, as the computed traversal path does not always pre-
serve identity characteristics. In contrast, our proposed algorithm can transform
shape and texture across a wide range of ages while still maintaining the person’s
identity and attributes.
Another limiting factor in modeling full lifespan age transformations is that
existing face aging datasets contain a very limited amount of babies and children.
To compensate for that, we labeled the FFHQ dataset [18] for gender and age
via crowd-sourcing. In addition, for each image we extracted face semantic maps
as well as head pose angles, glasses type and eye occlusion score.
Qualitative and quantitative evaluations show that our method outperforms
state-of-the-art aging algorithms as well as multi-domain transfer and latent
space traversal methods applied on the face aging task. The key contributions of
this paper are: 1) enabling both shape and texture transformations for lifespan
age synthesis, 2) novel multi-domain image-to-image translation GAN architec-
ture, 3) labelled FFHQ [18] dataset which we will share with the community.
We are aware of the potential ethical issues and potential bias such a method
can present. These issues are addressed in our Ethics and Bias statement in the
supplementary material.

2 Related Work
Early works in age progression have focused on building separate models for
specific sub-effects of aging, e.g., wrinkles [45,3,2], cranio-facial growth [33,34],
and face sub-regions [40,39]. Complete face transformation was explored via
calculating average faces of age clusters and transitioning between them [4,36],
wavelet transformation [42], dictionary learning [38], factor analysis [47] and
AAM face parameter fitting [22]. Age progression of children was specifically the
focus in [20], where the aging process was modelled as cascaded flows between
mean faces of pre-computed age clusters of eigenfaces.
Recently, deep learning has become the predominant approach for facial ag-
ing. Wang et al. [43] replaced the cascaded flows from [20] with a series of RNN
forward passes. Zhang et al. [48] and Antipov et al. [1] proposed autoencoder
GAN architectures where aging was performed by adding an age condition to
the latent space. Duong et al. [32,31] introduced a cascade of restricted Boltz-
mann machines and ResNet based probabilistic generative models (respectively)
to carry out the aging process between age groups. Yang et al. [46] proposed
a GAN based architecture with a pyramidal discriminator over age detection
features of the generated aged image. Liu et al. [28] introduced an additional age
transition discriminator to supervise the aging transitions between the age clus-
ters. Li et al. [25] fused the outputs from global and local patch generators to
synthesize the aged face. Wang et al. [44] added facial feature loss as well as age
classification loss to enforce the output image to have the same identity while
still progressing the age. Liu et al. [29] added gender and race attributes to their
GAN architecture to help avoid biases in the training data; they also proposed
a new wavelet-based discriminator to improve image quality. He et al. [13] en-
coded a personalized aging basis and applied specific age transforms to create an
age representation used to decode the aged face. The focus of most of those ap-
proaches was on aging adults to elderly (mostly texture changes). Our method
is the first to propose a full lifespan aging, 0–70 years old. We refer the reader
to these excellent surveys [9,10,35] for a broader overview of the advances in age
progression over the years.
Recent success with generative adversarial networks [11] significantly im-
proved image-to-image translations between two domains, with both paired [16]
(Pix2Pix) and unpaired [50] (CycleGAN) training data. More recent methods
disentangle the image into style and content latent spaces, e.g., MUNIT [15] and
DRIT [24], which share the content space but create multiple disjoint style latent
spaces. These methods are hard to scale to a large number of domains as they
require training two generators per pair of domains. FUNIT [27] used a sin-
gle generator that disentangled the image into shared content and style latent
spaces; however, it required an additional target image to explicitly encode the
style. One may consider aging effects as “style”. However when transferring
style between two age domains, non-age related styles, like skin color, gender
and ethnicity, might be transferred as well. Multi-domain transfer algorithms
like StarGAN [8] and STGAN [26] can edit multiple facial attributes, but those
are assumed to be distinct and contrasting. StarGAN generalizes CycleGAN to
map an input image into multiple domains using a single generator. STGAN
uses selective transfer units with an encoder-decoder architecture to select and
modify encoded features for attribute editing. These methods however are not
designed to work on the age translation task, as aging domains are highly corre-
lated and not distinct. Our proposed architecture enables translations between
highly correlated domains, and obtains a continuous traversable age latent space
while maintaining identity and image quality.

3 Algorithm

3.1 Overview

Our main goal is to design an algorithm that can learn the head shape defor-
mation as well as appearance changes across a wide range of ages. Ideally, one
would turn to supervised learning to tackle this problem. However, since this
process is continuous in nature, it requires a large amount of aligned image pairs
of the same person at different ages that will span all possible transitions. Un-
fortunately, there are no existing large-scale datasets that capture aging changes
over more than several years, let alone an entire life span. Furthermore, small
scale datasets like FGNET [22] capture subjects in different poses, environments
and lighting conditions, making supervised training very challenging. To this
end, we turn to adversarial learning and leverage the recent progress in unpaired
image-to-image translation GAN architectures [41,50,7,15,24,27]. We propose to
approximate the continuous aging process with six anchor age classes which
results in a multi-domain transfer problem.
We propose a novel generative adversarial network architecture that consists
of a single conditional generator and a single discriminator. The conditional
generator is responsible for transitions across age groups, and consists of three
parts: identity encoder, a mapping network, and a decoder. We assume that while
a person’s appearance changes with age their identity remains fixed. Therefore,
we encode age and identity in separate paths.
Each age group is represented by a unique pre-defined distribution. When
given a target age, we sample from the respective age group distribution and
assign it a vector age code. The age code is sent to a mapping network, that maps
it into a learned unified age latent space. The resulting latent space approximates
continuous age transformations. The input image is processed separately by the
identity encoder to extract identity features. The decoder takes the mapping
network output, a target age latent vector, and injects it into the identity features
using modulated convolutions, originally proposed in StyleGAN2 [19]. During
training, we use an additional age encoder to relate between real and generated
images to the pre-defined distributions of their respective age class.
For transformation to an age not represented in our anchor classes, we cal-
culate age latent codes for its two neighboring anchor classes and perform linear
interpolation to get the desired age code as input to the decoder.
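As a concrete illustration, this interpolation step can be sketched as below. The function and argument names (mapping_net, z_low, z_high, alpha) are ours; only the linear blend of the two anchor-class latent codes comes from the description above.

def interpolate_age_code(mapping_net, z_low, z_high, alpha):
    # Map the age codes sampled for the two neighboring anchor classes into
    # the learned latent space W_age, then blend linearly; alpha in [0, 1]
    # positions the target age between the two anchors.
    w_low = mapping_net(z_low)
    w_high = mapping_net(z_high)
    return (1.0 - alpha) * w_low + alpha * w_high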

Fig. 2: Algorithm overview.

3.2 Framework
Our algorithm takes a facial image x and a target age cluster as inputs. It then
generates an output image of the same person at the desired age cluster. Fig. 2
shows the model architecture as well as the training scheme.
As a pre-processing step, background and clothing items are removed from
the image using its corresponding semantic mask, which is part of our dataset
(see Sec. 4 for details). Our age input space, Z, is represented by a 50 · n element
vector, where n is the number of age classes. When the input age class is i, we
generate a vector z_i ∈ Z as

z_i = 1_i + v,   v ∼ N(0, 0.2² · I),   (1)

where 1_i is a 50 · n element vector that contains ones on elements 50 · i through
50 · (i + 1) − 1 and zeros elsewhere, and I is the identity matrix. A single generator
is used to generate all target ages. Our generator consists of an identity encoder,
a latent mapping network and a decoder. During training we also use an age
encoder to embed both real and generated images into the age latent space.
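A minimal sketch of the sampling rule in Eq. (1), assuming PyTorch; the constant and function names are ours and only the 50-element-per-class layout and the 0.2 standard deviation come from the text.

import torch

N_CLASSES = 6   # anchor age classes (0-2, 3-6, 7-9, 15-19, 30-39, 50-69)
CODE_DIM = 50   # elements of Z allocated to each class

def sample_age_code(class_idx, sigma=0.2):
    # 1_i: ones on the 50 elements belonging to class i, zeros elsewhere
    one_hot_block = torch.zeros(CODE_DIM * N_CLASSES)
    one_hot_block[CODE_DIM * class_idx: CODE_DIM * (class_idx + 1)] = 1.0
    # v ~ N(0, sigma^2 * I)
    return one_hot_block + sigma * torch.randn(CODE_DIM * N_CLASSES)

z_t = sample_age_code(class_idx=3)   # e.g., target the 15-19 anchor class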
Identity encoder. The identity encoder E_id takes an input image x and
extracts an identity feature tensor w_id, where w_id = E_id(x). These features
contain information about the image's local structures and the general shape of
the face, which play a key role in generating the same identity. The identity
encoder contains two downsampling layers followed by four residual blocks [12].
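A rough PyTorch sketch of such an encoder follows; only the layer counts (two downsampling layers, four residual blocks) come from the text, while channel widths, kernel sizes and activations are assumptions.

import torch.nn as nn

class ResBlock(nn.Module):
    # Plain residual block; the exact block design is an assumption.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class IdentityEncoder(nn.Module):
    # Two stride-2 downsampling layers followed by four residual blocks.
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            *[ResBlock(ch * 2) for _ in range(4)])
    def forward(self, x):
        return self.net(x)   # spatial identity feature tensor w_id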
Mapping network. The mapping network M : Z → W_age embeds an age
input vector z into the unified age latent space W_age, w_age = M(z), where M is
an 8-layer MLP network and w_age is a 256-element latent vector. The mapping
network learns an optimal age latent space that enables a smooth transition and
interpolation between age clusters, needed for continuous age transformations.
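A sketch of the mapping network under the same caveat: the depth (8 layers) and the 256-element output are stated in the text, while the hidden width and activation are assumptions.

import torch.nn as nn

class MappingNetwork(nn.Module):
    # 8-layer MLP mapping an age input vector z in Z to w_age in W_age.
    def __init__(self, z_dim=50 * 6, w_dim=256, hidden=256, n_layers=8):
        super().__init__()
        layers, in_dim = [], z_dim
        for _ in range(n_layers - 1):
            layers += [nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2)]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, w_dim))  # final layer outputs w_age
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)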
Decoder. Our decoder takes an age latent code along with identity features
and produces an output image y = F(w_id, w_age). The identity features w_id are
processed by styled convolution blocks [18]. To reduce water droplet [19] artifacts,
we replace the AdaIN normalization layers [14] with modulated convolution lay-
ers proposed in StyleGAN2 [19]. In addition, each modulated convolution layer is
followed by a pixel norm layer [17], as we observed it further helps reduce these
artifacts. We omit the noise injection in our implementation. Overall, we use four
styled convolution layers to manipulate the identity code and two upsampling
styled convolution layers to produce an image at the original size.
The overall generator mapping from an input image x and an input target
age vector z_t to an output image y is:

y = G(x, z_t) = F(E_id(x), M(z_t)).   (2)
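Putting the pieces together, Eq. (2) corresponds to the following composition; this is an illustrative sketch with the three sub-networks passed in as callables, not the authors' code.

def generate(identity_encoder, mapping_net, decoder, x, z_t):
    w_id = identity_encoder(x)    # identity feature tensor (E_id)
    w_age = mapping_net(z_t)      # target-age latent code in W_age (M)
    return decoder(w_id, w_age)   # age code modulates the identity features (F)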

Age encoder. The age encoder enforces a mapping of the input image x
into its correct location in the age vector space Z. It produces an age vector
z_s = E_age(x) that corresponds to the source age cluster s of the image x. The
age encoder needs to capture more global data in order to encode the general
appearance, regardless of the identity. To this end, we follow the architecture of
MUNIT [15]’s style encoder with four downsampling layers, followed by global
averaging and a fully connected layer to produce an age vector. Note that the
age encoder is not used for inference.
Discriminator. We use the StyleGAN discriminator [18] with minibatch
standard deviation. We modify the last fully connected layer to have n outputs
in order to discriminate multiple classes as suggested by Liu et al. [27]. For a
real image from class i, we only penalize the i-th output. Likewise, only the
j-th output is penalized for a generated image of class j.
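The per-class penalty can be sketched as follows. Selecting a single column of the discriminator output is the part stated above; pairing it with the logistic loss form is our assumption (following the non-saturating loss mentioned in Sec. 3.4).

import torch
import torch.nn.functional as F

def class_conditional_d_loss(d_outputs, class_idx, is_real):
    # d_outputs: (batch, n) discriminator outputs; only the column matching
    # the image's age class contributes to the loss, the rest are ignored.
    selected = d_outputs[:, class_idx]
    target = torch.ones_like(selected) if is_real else torch.zeros_like(selected)
    return F.binary_cross_entropy_with_logits(selected, target)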

3.3 Training Scheme

An overview of the training scheme can be seen in Fig. 2. To compensate for
imbalances between age clusters, in each training iteration, we first sample a
source cluster s and a target cluster t (t ≠ s). Then we sample an image from
each class. We then perform three forward passes:

y_gen = G(x, z_t),   y_rec = G(x, z_s),   y_cyc = G(y_gen, z_s).   (3)

Here, y_gen is the generated image at target age t and y_rec is the reconstructed
image at source age s. We also apply a cycle to reconstruct y_cyc at source age
s from generated image y_gen at age t. These passes provide us with all the
necessary signals to minimize the following loss functions.
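In code, the three passes of Eq. (3) amount to the sketch below, where G is the generator of Eq. (2) and z_s, z_t are the sampled source and target age codes (names are illustrative).

def training_forward_passes(G, x, z_s, z_t):
    y_gen = G(x, z_t)      # translate x to the target age cluster t
    y_rec = G(x, z_s)      # self-reconstruction at the source cluster s
    y_cyc = G(y_gen, z_s)  # cycle: map the aged image back to cluster s
    return y_gen, y_rec, y_cyc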
Adversarial loss. We use an adversarial loss conditioned on the source and
target age cluster of the real and fake images respectively,

L_adv(G, D) = E_{x,s}[log D_s(x)] + E_{x,t}[log(1 − D_t(y_gen))],   (4)

where D_i is the i-th output of the discriminator, s is the source age cluster of
the real image and t is the target cluster for the generated image.

Self reconstruction loss. This loss is used to force the generator to learn
the identity translation. When the given target age cluster is the same as the
source cluster, we minimize
L_rec(G) = ||x − y_rec||_1.   (5)
Cycle loss. To help identity preservation as well as a consistent skin tone
we employ the cycle consistency loss [50],
L_cyc(G) = ||x − y_cyc||_1.   (6)
Identity feature loss. To make sure the generator keeps the identity of the
person throughout the aging process, we minimize the L1 distance between the
identity features of the original image and those of the generated image,
L_id(G) = ||E_id(x) − E_id(y_gen)||_1.   (7)
Age vector loss. We enforce a correct embedding of real and generated
images to the input age space by penalizing the distance between the age encoder
outputs and the age vectors z_s, z_t that were sampled to generate outputs at the
source and target age clusters respectively. The loss is defined as
L_age(G) = ||E_age(x) − z_s||_1 + ||E_age(y_gen) − z_t||_1.   (8)
The overall optimization function is

min_G max_D   L_adv(G, D) + λ_rec L_rec(G) + λ_cyc L_cyc(G) + λ_id L_id(G) + λ_age L_age(G).   (9)
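For reference, the generator-side terms of Eqs. (5)-(8) can be assembled as in the sketch below; the adversarial term is left out since it involves the discriminator, the weights match the values reported in Sec. 3.4, and all names are illustrative.

import torch

def generator_losses(x, y_gen, y_rec, y_cyc, E_id, E_age, z_s, z_t,
                     lambda_rec=10.0, lambda_cyc=10.0, lambda_id=1.0, lambda_age=1.0):
    # L1 distances (mean-reduced here) between images, features and age codes
    l_rec = torch.mean(torch.abs(x - y_rec))                    # Eq. (5)
    l_cyc = torch.mean(torch.abs(x - y_cyc))                    # Eq. (6)
    l_id = torch.mean(torch.abs(E_id(x) - E_id(y_gen)))         # Eq. (7)
    l_age = (torch.mean(torch.abs(E_age(x) - z_s)) +
             torch.mean(torch.abs(E_age(y_gen) - z_t)))         # Eq. (8)
    return lambda_rec * l_rec + lambda_cyc * l_cyc + lambda_id * l_id + lambda_age * l_age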

3.4 Implementation details


We train 2 separate models, one for males and one for females. Each model was
trained with a batch size of 12 for 400 epochs on 4 GeForce RTX 2080 Ti GPUs.
We use the Adam optimizer [21] with β_1 = 0, β_2 = 0.999 and a learning rate
of 10⁻³. The learning rate is decayed by 0.5 after 50 and 100 epochs. Similar
to StyleGAN [18], we apply the non-saturating adversarial loss [11] with R1
regularization [30]. In addition, we also reduce the learning rate of the mapping
network M by a factor of 0.01 and employ exponential moving average for the
generator weights. We set λ_rec = 10, λ_cyc = 10, λ_id = 1, λ_age = 1. We refer the
readers to the supplementary material for architecture details of each component
in our framework. The code and pre-trained models are publicly available.
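A sketch of the corresponding optimizer setup, assuming PyTorch; the module names are placeholders and only the hyper-parameters above come from the text.

import torch

def build_optimizers(generator, mapping_net, discriminator, base_lr=1e-3):
    # Adam with beta1=0, beta2=0.999; the mapping network M gets a 100x
    # smaller learning rate, and all rates are halved after 50 and 100 epochs.
    opt_g = torch.optim.Adam(
        [{"params": generator.parameters(), "lr": base_lr},
         {"params": mapping_net.parameters(), "lr": base_lr * 0.01}],
        betas=(0.0, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=base_lr, betas=(0.0, 0.999))
    sched_g = torch.optim.lr_scheduler.MultiStepLR(opt_g, milestones=[50, 100], gamma=0.5)
    sched_d = torch.optim.lr_scheduler.MultiStepLR(opt_d, milestones=[50, 100], gamma=0.5)
    return opt_g, opt_d, sched_g, sched_d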

4 Dataset
We introduce a new facial aging dataset based on images from FFHQ [18],
‘FFHQ-Aging’. We used the Appen¹ crowd-sourcing platform to annotate gen-
der and age cluster for all 70,000 images in FFHQ, collecting 3 judgements
¹ https://www.appen.com/

Fig. 3: FFHQ-Aging dataset. We label 70k images from FFHQ dataset [18] for
gender and age via crowd-sourcing. In addition, for each image we extracted face
semantic maps as well as head pose angles, glasses type and eye occlusion score.

for each image. We defined 10 age clusters that capture both geometric and
appearance changes throughout a person’s life: 0–2, 3–6, 7–9, 10–14, 15–19, 20–
29, 30–39, 40–49, 50–69 and 70+. We trained a DeepLabV3 [6] network on the
CelebAMask-HQ [23] dataset and used the trained model to extract 19-label
face semantic maps for all 70K images. Finally, we used the Face++² platform
to get the head pose angles, glasses type (none, normal, or dark) and left and
right eye occlusion scores. We use the same alignment procedure as [17] with a
slightly larger crop size (see supplementary for details). We generated our images
and semantic maps at a resolution of 256×256, but the procedure is applicable to
higher resolutions too. Fig. 3 shows sample image & face semantics pairs from
the dataset. There are 32,170 males and 37,830 females in the dataset; the age
distribution per gender can be seen in the supplementary material. The dataset
is publicly available to the community.
For the purpose of training our network, we assigned images 0–68,999 for
training and images 69,000–69,999 for testing. Then, we pruned images with:
gender confidence below 0.66, age confidence below 0.6, head yaw angle greater
than 40°, head pitch angle greater than 30°, a dark glasses label, and an eye occlusion
score greater than 90 for a single eye and 50 for the eye pair. After pruning, we selected 6 age
clusters to train on: 0–2, 3–6, 7–9, 15–19, 30–39, 50–69. This process resulted
in 14,232 male and 14,066 female training images along with 198 male and 205
female images for testing. The pruned training set age distribution per gender
is presented in the supplementary material.
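The pruning rules can be expressed as a simple per-image filter; the following is a sketch with hypothetical annotation field names, and the eye-occlusion check reflects our reading of the criterion above.

def keep_for_training(ann):
    # ann: dict of per-image labels (confidences, head pose, glasses type,
    # eye occlusion scores); field names are illustrative, thresholds are
    # the ones listed in the text.
    single_eye_ok = max(ann["left_eye_occlusion"], ann["right_eye_occlusion"]) <= 90
    eye_pair_ok = min(ann["left_eye_occlusion"], ann["right_eye_occlusion"]) <= 50
    return (ann["gender_confidence"] >= 0.66
            and ann["age_confidence"] >= 0.6
            and abs(ann["yaw"]) <= 40
            and abs(ann["pitch"]) <= 30
            and ann["glasses"] != "dark"
            and single_eye_ok and eye_pair_ok)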

5 Evaluation

5.1 Comparison with Commercial Apps

We perform a qualitative comparison with the outputs of FaceApp³. FaceApp
provides binary facial aging filters to make people appear younger or older. Fig. 4
shows that although the FaceApp output image quality is high, it cannot perform
shape transformation and is mostly limited to skin texture. For transformations
to an older age, we applied both “old” and “cool old” filters available in FaceApp
and compared against our output for 50–69 age range. For transformations to
a younger age, we applied the “young2” filter which is roughly equivalent to
our 15–19 class. We also show our outputs for the 0–2 class to demonstrate our
² https://www.faceplusplus.com/
³ https://www.faceapp.com/

Fig. 4: Comparison with FaceApp filters. Note that FaceApp cannot deform the
shape of the head or generate extreme ages, e.g. 0–2.

algorithm's ability to learn head deformation. Even though FaceApp applies a
dedicated filter for each transition, in contrast with our multi-domain generator,
its age filters still do not transform the shape of the head.

5.2 Comparison with Age transformation methods

We compare our algorithm to three state-of-the-art age transformation methods:
IPCGAN [44], Yang et al. [46] (referred to as PyGAN), and S2GAN [13].
Qualitative Evaluation. We compare with PyGAN and S2GAN on CACD
dataset [5] on the images showcased by the authors in their papers. We train
on FFHQ and test on CACD, while both PyGAN and S2GAN train on CACD
dataset. Due to copyright issues with CACD images, we cannot present the
comparison figures in this manuscript. We encourage the reader to review these
images in the project’s website. Even though PyGAN is trained with a differ-
ent generator to produce each age cluster, our network is still able to achieve
better photorealism for multiple output classes with a single generator. In com-
parison to S2GAN, our algorithm is able to create more pronounced wrinkles
and facial features as the age progresses, all while spanning a wider range of age
transformations.
We also evaluate our performance w.r.t. IPCGAN trained on both the CACD &
FFHQ-Aging datasets in Fig. 5. Here, we use IPCGAN’s publicly available code
and retrain their framework on the FFHQ-Aging dataset for a fair comparison (termed
as ‘IPCGAN-retrained’). Our method outperforms both IPCGAN models in
terms of image quality and shape deformation.
User Study. In addition, we performed a user study to evaluate PyGAN
results vs. our results. In the study we measure: (a) how well the method
preserves the identity of the person in the photo, (b) how close the perceived age
is to the target age, and (c) which result is better overall. Our hypothesis was that
PyGAN would excel in identity preservation but not on the other metrics, since
PyGAN tends to keep the results close to the input photo (and thus cannot
perform large age changes).

Fig. 5: Comparison w.r.t. IPCGAN [44] on the FFHQ-Aging dataset. Left: our
method. Middle: IPCGAN trained on CACD. Right: IPCGAN trained on FFHQ-
Aging. The proposed framework outperforms IPCGAN, producing sharper, more
detailed and more realistic outputs across all age classes.

Age range: 50–69

                   PyGAN [46]   Ours
Same identity      19           13
Age difference     23.1         6.9
Overall better     4            16
Table 1: User study results vs. PyGAN [46]. PyGAN is expectedly better at
identity preservation, at the cost of not generating the target age (mean age
difference 23.1, compared to our 6.9). When asked which is better overall, users
preferred our results in 16 out of 20 cases.

To measure identity preservation, we show the input and output photos and
ask if the two contain the same person. To measure age accuracy, we show the
output photo and ask the age of the person, selected from a list of age ranges.
To measure overall quality, we show an input photo, and below it a PyGAN
result and our result side-by-side in a randomized order, and ask which result is
a better version of the input person in the target age range. We used Amazon
Mechanical Turk to collect answers for 20 randomly selected images from the FFHQ-
Aging dataset, repeating each question 5 times, for a total of 500 unique answers.
We show the user study interface in the supplemental material.
User study results are presented in Table 1. As expected, PyGAN preserves
subject identity more often (in 19 out of 20 cases, compared to 13 for our
method). This comes at a cost of much larger age gaps: the perceived age of
PyGAN results is on average 23.1 years away from the target age, compared to
6.9 years for our results. Since identity preservation and age preservation may
conflict, we also asked participants to evaluate which result is better overall. For
16 out of 20 test photos, our results were rated as better than PyGAN.
In a second user study, we compare our results to those of IPCGAN trained
on CACD dataset. We report results per age range, as well as overall results
(Table 2). We collected answers for 3 age ranges, 50 randomly selected images
per range, repeating each question 3 times, for a total of 2250 unique answers.
Similarly to PyGAN, IPCGAN better preserves identity (in 100% of the cases)
at the cost of age inaccuracies (results are on average 22.6 years away from the

Age range:        15–19           30–39           50–69           All
                  IPCGAN  Ours    IPCGAN  Ours    IPCGAN  Ours    IPCGAN  Ours
Same identity     50      50      50      45      50      41      150     136
Age difference    19.3    12.7    20.0    11.6    28.4    9.8     22.6    11.3
Overall better    9       40      8       42      10      38      27      120
Table 2: User study results vs. IPCGAN [44] for three age groups. IPCGAN
is expectedly better at identity preservation, at the cost of not generating the
target age (mean age difference 22.6, compared to our 11.3). When asked which
is better overall, users preferred our results in 120 out of 150 cases.


Fig. 6: Comparison with multi domain transfer methods. The leftmost column
is the input, followed by transformations to age classes 0–2, 3–6, 7–9, 15–19,
30–39 and 50–69 respectively. Multi domain transfer methods struggle to model
the gradual head deformation associated with age progression. Our method also
produces better images in terms of quality and photorealism, while correctly
modeling the growth of the head compared to StarGAN [7] and STGAN [26].

target age). When asked which result is better overall, participants picked our
results in 120 cases, compared to 27 for IPCGAN.
Comparison with Multi-class Domain transfer methods. To validate
our claim that multi-domain transfer methods struggle with shape deformations,
we compare our algorithm against two state-of-the-art baselines, StarGAN [7] and
STGAN [26]. We retrain both algorithms on our FFHQ-Aging dataset using the
same pre-processing procedure (see Sec. 3.2) to mask background and clothes
and the same sampling technique to compensate for dataset imbalances (see
Sec. 3.3). Fig. 6 shows that although STGAN occasionally deforms the shape for
the 0–2 class, both StarGAN and STGAN cannot produce a consistent shape
transformation across age classes.
Latent space interpolation. We show our method’s ability to generalize
and produce continuous age transformations by interpolation in the W_age latent
(Fig. 7 row labels: λ ∈ [−3, 3] for rows 1 and 3, λ ∈ [−5, 5] for row 5, λ ∈ [−20, 20] for row 7.)

Fig. 7: Comparison with InterFaceGAN [37]. Age cluster legend only applies
to our method. Rows 1,3,5: results on StyleGAN generated images. Row 7:
result on a real image, embedded into the StyleGAN latent space using LIA [49].
Rows 2,4,6: Our result on StyleGAN generated images. Bottom row: Our result
on a real image. Existing state-of-the-art interpolation methods cannot maintain
the identity (rows 1,3,5,7), gender (row 5) and ethnicity (rows 3,5) of the input
image. In addition, as seen in rows 1 & 3, using the same λ values on different
photos produces different age ranges.

space. Interpolation between two neighboring age classes is done by generating
two age latent codes w_age^t, w_age^{t+1}, where t, t+1 are adjacent age classes, and then
generating the desired interpolated code w_age = (1 − α) · w_age^t + α · w_age^{t+1}. The
rest of the process is identical to Sec. 3. We compare our results on two possible
setups. In the first case, we demonstrate that the StyleGAN [18] latent space
paths found by InterFaceGAN [37] cannot maintain identity, gender and race.
We sample a latent code in z space of StyleGAN, which generates a realistic
face image. We use the latent space boundary from InterFaceGAN, which can
change age while preserving gender, and edit the sampled latent code to produce
both younger and older versions of the face using λ ∈ [−3, +3] in z space. In
the second setup, we compare against real images embedded into StyleGAN’s
w latent space using the LIA [49] framework. We then change the age of the
embedded face by traversing on the latent space across the age boundary learnt
with InterFaceGAN on w space (this boundary was not gender conditioned).
We use λ ∈ [−20, +20] in w space to generate younger and older versions. In
Fig. 7 we can see that despite the excellent photorealism of InterFaceGAN on

Fig. 8: Limitations. Our network struggles to generalize to extreme poses (top row),
removing glasses (left column), removing thick beards (bottom right) and occlu-
sions (top left).

generated faces, the person’s identity is lost, and on some occasions the gender
is lost too. In addition, note how InterFaceGAN requires different λ values for
each input in order to achieve full lifespan transformation, as opposed to our
consistent age outputs in the traversal paths.

5.3 Limitations
While our network can generalize age transformations, it has limitations in gen-
eralizing for other potential cases such as extreme poses, removing glasses and
thick beards when rejuvenating a person, and handling occluded faces. Fig. 8
shows a representative example for each such case. We suspect that these issues
stem from a combination of using just two downsampling layers in the identity
encoder and the latent identity loss. The former creates relatively local feature
maps, while the latter enforces the latent identity spatial representations of two
different age classes to be the same, which, in turn, limits the network’s ability
to generalize for these cases.

6 Conclusions
We presented an algorithm that produces reliable age transformations for ages
0–70. Unlike previous approaches, our framework learns to change both shape
and texture as part of the aging process. The proposed architecture and training
scheme accurately generalize age, and thus, we can produce results for ages
never seen during training via latent space interpolation. In addition, we also
introduced a new facial dataset that can be used by the vision community for
various tasks. As demonstrated in our experiments, our method produces state-
of-the-art results.

Acknowledgements
We wish to thank Xuan Luo and Aaron Wetzler for their valuable discussions and
advice, and Thevina Dokka for her help in building the FFHQ-Aging dataset.
This work was supported in part by Futurewei Technologies. O.F. was supported
by the Brown Institute for Media Innovation. All images in this manuscript are
licensed under creative commons license and were taken from the FFHQ dataset.

References
1. Antipov, G., Baccouche, M., Dugelay, J.L.: Face aging with conditional generative
adversarial networks. arXiv preprint arXiv:1702.01983 (2017) 1, 4
2. Bando, Y., Kuratate, T., Nishita, T.: A simple method for modeling wrinkles on
human skin. In: 10th Pacific Conference on Computer Graphics and Applications,
2002. Proceedings. pp. 166–175. IEEE (2002) 4
3. Boissieux, L., Kiss, G., Thalmann, N.M., Kalra, P.: Simulation of skin aging and
wrinkles with cosmetics insight. In: Computer Animation and Simulation 2000, pp.
15–27. Springer (2000) 4
4. Burt, D.M., Perrett, D.I.: Perception of age in adult caucasian male faces: Com-
puter graphic manipulation of shape and colour information. Proceedings of the
Royal Society of London. Series B: Biological Sciences 259(1355), 137–143 (1995)
4
5. Chen, B.C., Chen, C.S., Hsu, W.H.: Cross-age reference coding for age-invariant
face recognition and retrieval. In: Proceedings of the European Conference on Com-
puter Vision (ECCV) (2014) 10
6. Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution
for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017) 9
7. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: Stargan: Unified gen-
erative adversarial networks for multi-domain image-to-image translation. In: The
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018) 3,
5, 12
8. Choi, Y., Uh, Y., Yoo, J., Ha, J.W.: Stargan v2: Diverse image synthesis for mul-
tiple domains. arXiv preprint arXiv:1912.01865 (2019) 4
9. Duong, C.N., Luu, K., Quach, K.G., Bui, T.D.: Longitudinal face aging in the
wild-recent deep learning approaches. arXiv preprint arXiv:1802.08726 (2018) 4
10. Fu, Y., Guo, G., Huang, T.S.: Age synthesis and estimation via faces: A survey.
IEEE transactions on pattern analysis and machine intelligence 32(11), 1955–1976
(2010) 4
11. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair,
S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural
information processing systems. pp. 2672–2680 (2014) 4, 8
12. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In:
Proceedings of the IEEE conference on computer vision and pattern recognition.
pp. 770–778 (2016) 6
13. He, Z., Kan, M., Shan, S., Chen, X.: S2gan: Share aging factors across ages and
share aging trends among individuals. In: The IEEE International Conference on
Computer Vision (ICCV) (2019) 1, 4, 10
14. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance
normalization. In: Proceedings of the IEEE International Conference on Computer
Vision. pp. 1501–1510 (2017) 7
15. Huang, X., Liu, M.Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-
to-image translation. In: Proceedings of the European Conference on Computer
Vision (ECCV). pp. 172–189 (2018) 3, 4, 5, 7
16. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with con-
ditional adversarial networks. In: The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR) (2017) 4
17. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for
improved quality, stability, and variation. In: International Conference on Learning
Representations (2018) 7, 9
18. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative
adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition. pp. 4401–4410 (2019) 3, 7, 8, 9, 13
19. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing
and improving the image quality of StyleGAN. CoRR abs/1912.04958 (2019) 3,
5, 7
20. Kemelmacher-Shlizerman, I., Suwajanakorn, S., Seitz, S.M.: Illumination-aware age
progression. In: The IEEE Conference on Computer Vision and Pattern Recogni-
tion (CVPR) (2014) 1, 4
21. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980 (2014) 8
22. Lanitis, A., Taylor, C.J., Cootes, T.F.: Toward automatic simulation of aging effects
on face images. IEEE Transactions on Pattern Analysis and Machine Intelligence
24(4), 442–455 (2002) 4, 5
23. Lee, C.H., Liu, Z., Wu, L., Luo, P.: Maskgan: Towards diverse and interactive facial
image manipulation. arXiv preprint arXiv:1907.11922 (2019) 9
24. Lee, H.Y., Tseng, H.Y., Huang, J.B., Singh, M., Yang, M.H.: Diverse image-to-
image translation via disentangled representations. In: Proceedings of the Euro-
pean Conference on Computer Vision (ECCV). pp. 35–51 (2018) 4, 5
25. Li, P., Hu, Y., Li, Q., He, R., Sun, Z.: Global and local consistent age generative
adversarial networks. arXiv preprint arXiv:1801.08390 (2018) 4
26. Liu, M., Ding, Y., Xia, M., Liu, X., Ding, E., Zuo, W., Wen, S.: Stgan: A unified
selective transfer network for arbitrary image attribute editing. In: Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3673–3682
(2019) 3, 4, 12
27. Liu, M.Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., Kautz, J.: Few-
shot unsupervised image-to-image translation. arXiv preprint arXiv:1905.01723
(2019) 3, 4, 5, 7
28. Liu, S., Shi, J., Liang, J., Yang, M.H.: Face parsing via recurrent propagation.
arXiv preprint arXiv:1708.01936 (2017) 4
29. Liu, Y., Li, Q., Sun, Z.: Attribute-aware face aging with wavelet-based generative
adversarial networks. In: The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) (2019) 4
30. Mescheder, L., Geiger, A., Nowozin, S.: Which training methods for gans do ac-
tually converge? In: International Conference on Machine learning (ICML) (2018)
8
31. Nhan Duong, C., Gia Quach, K., Luu, K., Le, N., Savvides, M.: Temporal non-
volume preserving approach to facial age-progression and age-invariant face recog-
nition. In: The IEEE International Conference on Computer Vision (ICCV) (2017)
1, 4
32. Nhan Duong, C., Luu, K., Gia Quach, K., Bui, T.D.: Longitudinal face modeling
via temporal deep restricted boltzmann machines. In: Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition. pp. 5772–5780 (2016) 4
33. Ramanathan, N., Chellappa, R.: Modeling age progression in young faces. In: Com-
puter Vision and Pattern Recognition, 2006 IEEE Computer Society Conference
on. vol. 1, pp. 387–394. IEEE (2006) 4
34. Ramanathan, N., Chellappa, R.: Modeling shape and textural variations in aging
faces. In: Automatic Face & Gesture Recognition, 2008. FG’08. 8th IEEE Interna-
tional Conference on. pp. 1–8. IEEE (2008) 4
35. Ramanathan, N., Chellappa, R., Biswas, S.: Computational methods for modeling
facial aging: A survey. Journal of Visual Languages & Computing 20(3), 131–144
(2009) 4
36. Rowland, D.A., Perrett, D.I.: Manipulating facial appearance through shape and
color. IEEE computer graphics and applications 15(5), 70–76 (1995) 4
37. Shen, Y., Gu, J., Tang, X., Zhou, B.: Interpreting the latent space of gans for
semantic face editing. arXiv preprint arXiv:1907.10786 (2019) 3, 13
38. Shu, X., Tang, J., Lai, H., Liu, L., Yan, S.: Personalized age progression with aging
dictionary. In: Proceedings of the IEEE International Conference on Computer
Vision. pp. 3970–3978 (2015) 4
39. Suo, J., Chen, X., Shan, S., Gao, W., Dai, Q.: A concatenational graph evolu-
tion aging model. IEEE transactions on pattern analysis and machine intelligence
34(11), 2083–2096 (2012) 4
40. Suo, J., Zhu, S.C., Shan, S., Chen, X.: A compositional and dynamic model for
face aging. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(3),
385–401 (2010) 4
41. Taigman, Y., Polyak, A., Wolf, L.: Unsupervised cross-domain image generation.
ICLR (2017) 5
42. Tiddeman, B., Burt, M., Perrett, D.: Prototyping and transforming facial textures
for perception research. IEEE computer graphics and applications 21(5), 42–50
(2001) 4
43. Wang, W., Cui, Z., Yan, Y., Feng, J., Yan, S., Shu, X., Sebe, N.: Recurrent face
aging. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition. pp. 2378–2386 (2016) 1, 4
44. Wang, Z., Tang, X., Luo, W., Gao, S.: Face aging with identity-preserved condi-
tional generative adversarial networks. In: Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition. pp. 7939–7947 (2018) 1, 4, 10, 11, 12
45. Wu, Y., Thalmann, N.M., Thalmann, D.: A plastic-visco-elastic model for wrinkles
in facial animation and skin aging. In: Fundamentals of Computer Graphics, pp.
201–213. World Scientific (1994) 4
46. Yang, H., Huang, D., Wang, Y., Jain, A.K.: Learning face age progression: A
pyramid architecture of gans. In: The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR) (2018) 1, 4, 10, 11
47. Yang, H., Huang, D., Wang, Y., Wang, H., Tang, Y.: Face aging effect simulation
using hidden factor analysis joint sparse representation. IEEE Transactions on
Image Processing 25(6), 2493–2507 (2016) 4
48. Zhang, Z., Song, Y., Qi, H.: Age progression/regression by conditional adversarial
autoencoder. In: The IEEE Conference on Computer Vision and Pattern Recogni-
tion (CVPR) (2017) 1, 4
49. Zhu, J., Zhao, D., Zhang, B.: Lia: Latently invertible autoencoder with adversarial
learning. arXiv preprint arXiv:1906.08090 (2019) 13
50. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation
using cycle-consistent adversarial networks. In: The IEEE International Conference
on Computer Vision (ICCV) (2017) 4, 5, 8
