MV-VTON: Multi-View Virtual Try-On With Diffusion Models
Haoyu Wang1,2,*, Zhilu Zhang2,†, Donglin Di3, Shiliang Zhang1, Wangmeng Zuo2
1 State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
2 Harbin Institute of Technology
3 Space AI, Li Auto
Method
Preliminaries for Diffusion Models
Figure 2: Comparison between previous datasets and our proposed MVG dataset. (a) The dataset used by previous work contains only frontal-view clothing and person images. In contrast, our dataset (b) offers images from five different views.

Diffusion models (Ho, Jain, and Abbeel 2020; Rombach et al. 2022) have demonstrated strong capabilities in visual generation; they transform a Gaussian distribution into a target distribution by iterative denoising. In particular, Stable Diffusion (Rombach et al. 2022) is a widely used generative diffusion model, which consists of a CLIP text encoder ET, a VAE encoder E as well as a decoder D, and a time-conditional denoising model ϵθ. The text encoder ET encodes the input text prompt y as the conditional input. The VAE encoder E compresses the input image I into latent space to obtain the latent variable z0 = E(I). Conversely, the VAE decoder D decodes the output of the backbone from latent space back to pixel space. Given the VAE encoder E, at an arbitrary time step t the forward process is performed as
$$\alpha_t := \prod_{s=1}^{t} (1 - \beta_s), \qquad z_t = \sqrt{\alpha_t}\, z_0 + \sqrt{1 - \alpha_t}\, \epsilon, \qquad (1)$$
where ϵ ∼ N(0, 1) is random Gaussian noise and β is a predefined variance schedule. The training objective is to learn a noise prediction network that minimizes the disparity between the predicted noise and the noise added to the ground truth. The loss function can be defined as
$$\mathcal{L}_{LDM} = \mathbb{E}_{\mathcal{E}(I),\, y,\, \epsilon \sim \mathcal{N}(0,1),\, t}\big[\,\|\epsilon - \epsilon_\theta(z_t, t, E_T(y))\|_2^2\,\big], \qquad (2)$$
where zt represents the encoded image E(I) with random Gaussian noise ϵ ∼ N(0, 1) added.
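For concreteness, the following is a minimal PyTorch sketch of the forward noising step in Eq. (1) and the noise-prediction loss in Eqs. (2)-(3); the `denoiser` module and the conditioning tensor are placeholders, not the actual Stable Diffusion implementation.

```python
import torch
import torch.nn.functional as F

def make_alphas(betas: torch.Tensor) -> torch.Tensor:
    # alpha_t = prod_{s<=t} (1 - beta_s), as in Eq. (1)
    return torch.cumprod(1.0 - betas, dim=0)

def forward_diffuse(z0: torch.Tensor, t: torch.Tensor, alphas: torch.Tensor):
    # z_t = sqrt(alpha_t) * z_0 + sqrt(1 - alpha_t) * eps
    eps = torch.randn_like(z0)
    a_t = alphas[t].view(-1, 1, 1, 1)
    zt = a_t.sqrt() * z0 + (1.0 - a_t).sqrt() * eps
    return zt, eps

def ldm_loss(denoiser, z0, cond, t, alphas):
    # Eqs. (2)/(3): predict the injected noise given z_t, t and the condition.
    zt, eps = forward_diffuse(z0, t, alphas)
    eps_pred = denoiser(zt, t, cond)
    return F.mse_loss(eps_pred, eps)
```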
In our work, we use an exemplar-based inpainting model (Yang et al. 2023) as the backbone, which employs an image c rather than text as the prompt and encodes c with the CLIP image encoder EI. Thus, the loss function in Eq. (2) can be modified as
$$\mathcal{L}_{LDM} = \mathbb{E}_{\mathcal{E}(I),\, c,\, \epsilon \sim \mathcal{N}(0,1),\, t}\big[\,\|\epsilon - \epsilon_\theta(z_t, t, E_I(c))\|_2^2\,\big]. \qquad (3)$$

Multi-View Virtual Try-On
We denote by m the inpainting mask and by a the masked person image x. The model concatenates zt (with z0 = E(x)), the encoded clothing-agnostic image E(a), and the resized clothing-agnostic mask m in the channel dimension, and feeds them into the backbone as the spatial input. Besides, we use an existing method to pre-warp the clothing and paste it onto a. While utilizing the CLIP image encoder to encode clothing as the global condition of the diffusion model, we also introduce an additional encoder (Zhang, Rao, and Agrawala 2023) to encode clothing and provide more refined local conditions. Since both the frontal and back view clothing need to be encoded, directly sending both into the backbone as conditions may result in confusion of clothing features. To alleviate this problem, we propose a view-adaptive selection mechanism. Based on the similarity between the poses of the person and the two garments, it conducts hard-selection when extracting global features and soft-selection when extracting local features. To preserve semantic information in the clothing and enhance high-frequency details in the global features using the local ones, we introduce joint attention blocks. They first independently align the global and local features to the person features and then selectively fuse them. Figure 3(a) depicts an overview of our proposed method.
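The spatial input described above could be assembled roughly as follows; the `vae` handle, the 4-channel latent size, and the single-channel mask are illustrative assumptions rather than the exact backbone interface.

```python
import torch
import torch.nn.functional as F

def build_spatial_input(z_t, agnostic_img, cloth_mask, vae):
    """Concatenate z_t, the encoded clothing-agnostic image E(a),
    and the resized clothing-agnostic mask m along the channel dim."""
    e_a = vae.encode(agnostic_img)                      # E(a), e.g. (B, 4, h, w); assumed API
    m = F.interpolate(cloth_mask, size=e_a.shape[-2:])  # resize mask to the latent resolution
    return torch.cat([z_t, e_a, m], dim=1)              # e.g. 4 + 4 + 1 = 9 input channels
```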
Figure 3: (a) Overview of MV-VTON. It encodes the frontal and back view clothing into global features using the CLIP image encoder and extracts multi-scale local features through an additional encoder El. Both features act as conditional inputs for the decoder of the backbone, and both are selectively extracted through the view-adaptive selection mechanism. (b) Soft-selection modulates the clothing features of the frontal and back views, respectively, based on the similarity between the clothing's pose and the person's pose; the features from both views are then concatenated in the channel dimension.
View-Adaptive Selection Mechanism
Hard-Selection for Global Clothing Features. We perform hard-selection based on the similarity between the garments' poses and the person's pose. That is, we select only the one piece of clothing whose pose is closest to the person's pose as the input of the image encoder, since it is sufficient to cover the global semantic information. When generating the pre-warped clothing for E(a), the same selection is also performed. Implementation details of hard-selection can be found in the supplementary material.
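A minimal sketch of the hard-selection idea, assuming pose features are available as flattened vectors and using cosine similarity as the closeness measure (the paper's exact rule, based on arm positions in the skeleton image, is given in the appendix):

```python
import torch

def hard_select(person_pose_feat, frontal_pose_feat, back_pose_feat,
                frontal_cloth, back_cloth):
    """Pick the single garment whose pose is closest to the person's pose.
    Pose features are assumed to be flattened vectors from a pose encoder."""
    sim_f = torch.cosine_similarity(person_pose_feat, frontal_pose_feat, dim=-1)
    sim_b = torch.cosine_similarity(person_pose_feat, back_pose_feat, dim=-1)
    return frontal_cloth if sim_f >= sim_b else back_cloth
```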
Soft-Selection for Local Clothing Features. We utilize an additional encoder El to extract the multi-scale local features of the frontal and back view clothing, which at the i-th scale are denoted as c_f^i and c_b^i, respectively. When reconstructing the try-on results, it may be insufficient to rely solely on the clothing from either the frontal or the back view in certain scenes, such as the third column shown in Figure 2(b). In these cases, it may be necessary to incorporate clothing features from both views. However, simply combining the two may lead to confusion of features. Instead, we introduce a soft-selection block to modulate their features separately, as shown in Figure 3(b).

First, the person's pose ph, the frontal-view clothing's pose pf, and the back-view clothing's pose pb are encoded by the pose encoder Ep to obtain their respective features Ep(ph), Ep(pf), and Ep(pb). Details of the pose encoder can be found in the supplementary material. When processing the frontal-view clothing, in the i-th soft-selection block we map Ep(ph) and Ep(pf) to P_h^i and P_f^i through linear layers with weights W_h^i and W_f^i, respectively. We also map c_f^i to C_f^i through a linear layer with weights W_c^i. Then, we calculate the similarity between the person's pose and the frontal-view clothing's pose to get the selection weights of the frontal-view clothing, i.e.,
$$weights = \mathrm{softmax}\!\left(\frac{P_h^i (P_f^i)^T}{\sqrt{d}}\right), \qquad (4)$$
where weights represents the selection weights of the frontal-view clothing and d represents the dimension of these matrices. Assuming that the person's pose is biased towards the front, as depicted in the second column of Figure 2(b), the similarity between the person's pose and the frontal-view clothing's pose will be higher. Consequently, the corresponding clothing features will be enhanced by weights, and vice versa. The features of the back-view clothing c_b^i undergo similar processing. Finally, the two selected clothing features are concatenated along the channel dimension as the local condition c_l^i of the backbone.
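A sketch of one soft-selection block implementing Eq. (4); the token shapes and, in particular, the way `weights` is applied to the mapped clothing features are assumptions, since the text only states that the features are "enhanced by weights".

```python
import math
import torch
import torch.nn as nn

class SoftSelection(nn.Module):
    """Sketch of one soft-selection block (Eq. 4), under assumed shapes."""
    def __init__(self, pose_dim: int, cloth_dim: int, d: int):
        super().__init__()
        self.W_h = nn.Linear(pose_dim, d)   # person pose    -> P_h
        self.W_f = nn.Linear(pose_dim, d)   # clothing pose  -> P_f
        self.W_c = nn.Linear(cloth_dim, d)  # clothing feats -> C_f

    def forward(self, pose_person, pose_cloth, cloth_feat):
        P_h = self.W_h(pose_person)      # (B, N, d)
        P_c = self.W_f(pose_cloth)       # (B, M, d)
        C_c = self.W_c(cloth_feat)       # (B, M, d)
        weights = torch.softmax(
            P_h @ P_c.transpose(1, 2) / math.sqrt(P_h.shape[-1]), dim=-1)
        # Modulate the clothing features by the pose-similarity weights (assumption).
        return weights @ C_c

# The frontal and back branches use the same structure; their outputs are then
# concatenated along the channel dimension to form the local condition c_l.
```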
Joint Attention Blocks
Global clothing features cg provide identical conditions for the blocks at each scale of the U-Net, while the multi-scale local clothing features cl allow for reconstructing more accurate details. We present joint attention blocks to align cg and cl with the current person features, as shown in Figure 4. To retain most of the semantic information in the global features cg, we use the local features cl to refine lost or erroneous texture details in cg through selective fusion.

Figure 4: Overview of the proposed joint attention blocks (⊕: addition, ⊙: channel-wise multiplication).

Specifically, in the i-th joint attention block, we first calculate self-attention for the current features f_in^i. Then, we deploy a double cross-attention: the queries (Q) come from f_in^i, the global features cg serve as one set of keys (K) and values (V), and the local features c_l^i serve as another set of keys (K) and values (V). After being aligned to the person's pose through cross-attention, the clothing features cg and c_l^i are selectively fused channel-wise, i.e.,
$$f_{out}^{i} = \mathrm{softmax}\!\left(\frac{Q_g^i (K_g^i)^T}{\sqrt{d}}\right)V_g^i + \lambda \odot \mathrm{softmax}\!\left(\frac{Q_l^i (K_l^i)^T}{\sqrt{d}}\right)V_l^i, \qquad (5)$$
where Q_g^i, K_g^i, V_g^i denote the Q, K, V of the global branch, Q_l^i, K_l^i, V_l^i denote the Q, K, V of the local branch, λ is the learnable fusion vector, ⊙ denotes channel-wise multiplication, and f_out^i represents the clothing features after selective fusion. By engaging and fusing the global and local clothing features, we enhance the retention of high-frequency garment details, e.g., text and patterns.
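A sketch of a joint attention block following Eq. (5) and the layout of Figure 4; the residual connections, the feed-forward layer, the λ initialization, and the use of `nn.MultiheadAttention` are implementation assumptions.

```python
import torch
import torch.nn as nn

class JointAttentionBlock(nn.Module):
    """Self-attention, then a double cross-attention over global (c_g) and
    local (c_l) clothing features, fused channel-wise with a learnable vector."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_g = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Parameter(torch.ones(dim))  # learnable fusion vector lambda (init is a design choice)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, f_in, c_g, c_l):
        x = f_in + self.self_attn(f_in, f_in, f_in)[0]
        glob, _ = self.cross_g(x, c_g, c_g)   # align global features to the person
        loc, _ = self.cross_l(x, c_l, c_l)    # align local features to the person
        x = x + glob + self.fuse * loc        # Eq. (5): global + lambda ⊙ local
        return x + self.ff(x)
```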
Training Objectives
As stated in the preliminaries, diffusion models learn to generate images from random Gaussian noise. However, the training objective in Eq. (3) is defined in latent space and does not explicitly constrain the generated results in visible image space, resulting in slight color differences from the ground truth. To alleviate this problem, we additionally employ an ℓ1 loss L1 and a perceptual loss (Johnson, Alahi, and Fei-Fei 2016) Lperc. The ℓ1 loss is calculated as
$$\mathcal{L}_1 = \|\hat{x} - x\|_1, \qquad (6)$$
where x̂ is the image reconstructed using Eq. (1). The perceptual loss is calculated as
$$\mathcal{L}_{perc} = \sum_{k=1}^{5} \|\phi_k(\hat{x}) - \phi_k(x)\|_1, \qquad (7)$$
where ϕk denotes the k-th layer of VGG (Simonyan and Zisserman 2014). In total, the overall training objective can be written as
$$\mathcal{L} = \mathcal{L}_{LDM} + \lambda_1 \mathcal{L}_1 + \lambda_{perc} \mathcal{L}_{perc}, \qquad (8)$$
where λ1 and λperc are balancing weights.
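A hedged sketch of the additional objectives in Eqs. (6)-(8), assuming a torchvision VGG-16 backbone for ϕk; the exact layer split is not specified here and is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class VGGPerceptual(torch.nn.Module):
    """Perceptual loss of Eq. (7): L1 distance between VGG feature maps of the
    reconstruction and the ground truth, summed over five stages (assumed split)."""
    def __init__(self):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features.eval()
        self.stages = torch.nn.ModuleList(
            [feats[:4], feats[4:9], feats[9:16], feats[16:23], feats[23:30]])
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x_hat, x):
        loss = 0.0
        for stage in self.stages:
            x_hat, x = stage(x_hat), stage(x)
            loss = loss + F.l1_loss(x_hat, x)
        return loss

def total_loss(l_ldm, x_hat, x, perc, lam_1=0.1, lam_perc=1e-4):
    # Eq. (8): L = L_LDM + lambda_1 * L_1 + lambda_perc * L_perc
    return l_ldm + lam_1 * F.l1_loss(x_hat, x) + lam_perc * perc(x_hat, x)
```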
Experiments
Experimental Settings
Datasets. For the proposed multi-view virtual try-on task, we collect the MVG dataset containing 1,009 samples. Each sample contains five images of the same person wearing the same garment from five different views, for a total of 5,045 images, as shown in Figure 2(b). The image resolution is about 1K. We explain how the dataset is collected and how it is used for MV-VTON in the supplementary material. The proposed method can also be applied to the frontal-view virtual try-on task. Our frontal-view experiments are carried out on the VITON-HD (Lee et al. 2022) and DressCode (Morelli et al. 2022) datasets, which contain more than 10,000 frontal-view person and upper-body clothing image pairs. We follow previous work in how these datasets are used.

Evaluation Metrics. Following previous works (Kim et al. 2023; Morelli et al. 2023), we use four metrics to evaluate the performance of our method: Structural Similarity (SSIM) (Wang et al. 2004), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018), Frechet Inception Distance (FID) (Heusel et al. 2017), and Kernel Inception Distance (KID) (Bińkowski et al. 2018). Specifically, for the paired test setting, which directly uses the paired data in the dataset, we utilize all four metrics for evaluation. For the unpaired test setting, in which the given garment differs from the garment originally worn by the target person, we use FID and KID for evaluation; to distinguish them from the paired setting, we denote them as FIDu and KIDu, respectively.
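These four metrics can be computed, for example, with the torchmetrics package (assuming it and its image-metric dependencies are installed); the constructor arguments below are illustrative, not the settings used in the paper.

```python
import torch
from torchmetrics.image import (StructuralSimilarityIndexMeasure,
                                LearnedPerceptualImagePatchSimilarity,
                                FrechetInceptionDistance, KernelInceptionDistance)

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)
fid = FrechetInceptionDistance(feature=2048, normalize=True)
kid = KernelInceptionDistance(subset_size=50, normalize=True)

def paired_scores(pred, target):
    # pred/target: float images in [0, 1], shape (B, 3, H, W)
    return {"SSIM": ssim(pred, target).item(),
            "LPIPS": lpips(pred, target).item()}

def accumulate_unpaired(pred, real):
    # FID/KID compare feature distributions: accumulate over the whole test set,
    # then call fid.compute() / kid.compute() to obtain FIDu / KIDu.
    fid.update(real, real=True); fid.update(pred, real=False)
    kid.update(real, real=True); kid.update(pred, real=False)
```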
Implementation Details. We use Paint by Example (Yang et al. 2023) as the backbone of our method and copy the weights of its encoder to initialize El. The hyper-parameter λ1 is set to 1e-1, and λperc is set to 1e-4. We train our model on 2 NVIDIA Tesla A100 GPUs for 40 epochs with a batch size of 4 and a learning rate of 1e-5. We use the AdamW (Loshchilov and Hutter 2017) optimizer with β1 = 0.9 and β2 = 0.999.

Comparison Settings. We compare our method with Paint By Example (Yang et al. 2023), PF-AFN (Ge et al. 2021b), GP-VTON (Xie et al. 2023), LaDI-VTON (Morelli et al. 2023), DCI-VTON (Gou et al. 2023), StableVITON (Kim et al. 2023), and IDM-VTON (Choi et al. 2024) on both the frontal-view and multi-view virtual try-on tasks. For multi-view virtual try-on, we compare these methods on the proposed MVG dataset. For fairness, we fine-tune the previous methods on the MVG dataset according to their original training settings. Since previous methods can only take a single clothing image as input, we feed the frontal-view and back-view clothing respectively and select the better result. For frontal-view virtual try-on, we compare these methods on the VITON-HD and DressCode datasets. Following previous works' settings, the proposed MV-VTON takes only one frontal-view garment as input during training and inference.

| Methods | Reference | MVG (LPIPS↓ / SSIM↑ / FID↓ / KID↓) | VITON-HD (LPIPS↓ / SSIM↑ / FID↓ / KID↓) | DressCode Upper Body (LPIPS↓ / SSIM↑ / FID↓ / KID↓) |
|---|---|---|---|---|
| Paint by Example | CVPR23 | 0.120 / 0.880 / 54.38 / 14.95 | 0.150 / 0.843 / 13.78 / 4.48 | 0.078 / 0.899 / 15.21 / 4.51 |
| PF-AFN | CVPR21 | 0.139 / 0.873 / 49.47 / 12.81 | 0.141 / 0.855 / 7.76 / 4.19 | 0.091 / 0.902 / 13.11 / 6.29 |
| GP-VTON | CVPR23 | - / - / - / - | 0.085 / 0.889 / 6.25 / 0.77 | 0.236 / 0.781 / 19.37 / 8.07 |
| LaDI-VTON | MM23 | 0.069 / 0.921 / 29.14 / 4.39 | 0.094 / 0.872 / 7.08 / 1.49 | 0.063 / 0.922 / 11.85 / 3.20 |
| DCI-VTON | MM23 | 0.062 / 0.929 / 25.71 / 0.95 | 0.074 / 0.893 / 5.52 / 0.57 | 0.043 / 0.937 / 11.87 / 1.91 |
| StableVITON | CVPR24 | 0.063 / 0.929 / 23.52 / 0.46 | 0.073 / 0.888 / 6.15 / 1.34 | 0.040 / 0.937 / 10.18 / 1.70 |
| IDM-VTON | ECCV24 | 0.095 / 0.896 / 34.66 / 5.33 | 0.135 / 0.826 / 14.36 / 8.63 | 0.066 / 0.912 / 13.88 / 5.39 |
| Ours | - | 0.050 / 0.936 / 22.18 / 0.35 | 0.069 / 0.897 / 5.43 / 0.49 | 0.040 / 0.941 / 8.26 / 1.39 |

Table 1: Quantitative comparison with previous work under the paired setting. For the multi-view virtual try-on task, we show results on our proposed MVG dataset; for the frontal-view task, we show results on the VITON-HD (Lee et al. 2022) and DressCode (Morelli et al. 2022) datasets. The best results are in bold. Note that all previous works have been fine-tuned on our proposed MVG dataset when comparing on the multi-view virtual try-on task.
Figure 5: Qualitative comparisons on the multi-view virtual try-on task with the MVG dataset (columns: frontal cloth, back cloth, person, Paint By Example, PF-AFN, LaDI-VTON, DCI-VTON, StableVITON, IDM-VTON, Ours).

Figure 6: Qualitative comparisons on the frontal-view virtual try-on task with the VITON-HD and DressCode datasets (columns: clothing, person, Paint By Example, PF-AFN, GP-VTON, LaDI-VTON, DCI-VTON, StableVITON, IDM-VTON, Ours).
Quantitative Evaluation
Table 1 reports the quantitative results under the paired setting, and Table 2 shows the results under the unpaired setting. On the multi-view virtual try-on task, as can be seen, thanks to the view-adaptive selection mechanism, our method can reasonably select clothing features according to the person's pose, so it outperforms existing methods on various metrics, especially LPIPS and SSIM. Furthermore, owing to the joint attention blocks, our approach excels at preserving high-frequency details of the original garments across both frontal-view and multi-view virtual try-on scenarios, thus achieving superior performance on these metrics.

Qualitative Evaluation
Multi-View Virtual Try-On. As shown in Figure 5, MV-VTON generates more realistic multi-view results compared to previous methods. Specifically, in the first row, due to the lack of adaptive selection of clothes, previous methods have difficulty generating the hood of the original clothing. Moreover, in the second row, previous methods often struggle to maintain fidelity to the original garments. In contrast, our method effectively addresses the aforementioned problems and generates high-fidelity results. We provide more results of multi-view virtual try-on in the supplementary material.

Frontal-View Virtual Try-On. As shown in Figure 6, our method also demonstrates superior performance over existing methods on the frontal-view virtual try-on task, particularly in retaining clothing details. Specifically, our method not only faithfully generates complex patterns (in the first row), but also better preserves the 'Wrangler' lettering on the clothing (in the second row). We provide more qualitative comparisons in the supplementary material, as well as dressing results under complex human poses.
| Method | MVG FIDu↓ | MVG KIDu↓ | VITON-HD FIDu↓ | VITON-HD KIDu↓ |
|---|---|---|---|---|
| Paint by Example | 43.79 | 5.92 | 17.27 | 4.56 |
| PF-AFN | 47.38 | 7.04 | 21.18 | 6.57 |
| GP-VTON | - | - | 9.11 | 1.21 |
| LaDI-VTON | 36.61 | 3.39 | 9.55 | 1.83 |
| DCI-VTON | 36.03 | 3.79 | 8.93 | 1.07 |
| StableVITON | 35.85 | 4.22 | 9.86 | 1.09 |
| IDM-VTON | 40.73 | 5.74 | 18.27 | 10.43 |
| Ours | 33.44 | 2.69 | 8.67 | 0.78 |

Table 2: Unpaired setting's quantitative results on our MVG dataset and the VITON-HD dataset. The best results are in bold.

Figure 7: Visualization of the view-adaptive selection's effect (panels: (w/o) hard-selection vs. Ours, and (w/o) soft-selection vs. Ours).

Figure 8: Visualization of the joint attention blocks' effect (panels: (w/o) local features, (w/o) global features, Ours).
| Hard | Soft | LPIPS↓ | SSIM↑ | FID↓ | KID↓ | FIDu↓ | KIDu↓ |
|---|---|---|---|---|---|---|---|
| × | × | 0.068 | 0.925 | 25.13 | 0.77 | 35.28 | 3.24 |
| √ | × | 0.064 | 0.928 | 24.58 | 0.62 | 34.67 | 3.05 |
| × | √ | 0.052 | 0.934 | 22.18 | 0.43 | 33.47 | 2.74 |
| √ | √ | 0.050 | 0.936 | 22.18 | 0.35 | 33.44 | 2.69 |

Table 3: Ablation study of the view-adaptive selection mechanism on the MVG dataset.

| Global | Local | LPIPS↓ | SSIM↑ | FID↓ | KID↓ | FIDu↓ | KIDu↓ |
|---|---|---|---|---|---|---|---|
| √ | × | 0.070 | 0.896 | 5.76 | 0.81 | 9.15 | 1.09 |
| √ | √ | 0.069 | 0.897 | 5.43 | 0.49 | 8.67 | 0.78 |

Table 4: Ablation study of joint attention blocks on the MVG and VITON-HD datasets.
Ablation Studies
Effect of View-Adaptive Selection. We investigate the effect of view-adaptive selection on the multi-view virtual try-on task. Specifically, no hard-selection means that we directly concatenate the two garments' features encoded by CLIP, and no soft-selection means that the two clothing features are concatenated without passing through the soft-selection blocks. Comparison results are shown in Table 3 and Figure 7. As can be seen, the performance drops considerably without hard-selection and soft-selection. Removing hard-selection confuses the two views' clothing features, as shown by the blurriness of the 'POP' text in Figure 7. In addition, removing soft-selection causes the model to lose some clothing information when processing side-view cases, such as the missing white hood and cuffs in Figure 7.

Effect of Joint Attention Blocks. To demonstrate the effectiveness of fusing global and local features through joint attention blocks, we discard the global feature extraction branch and the local feature extraction branch, respectively. Results are shown in Table 4 and Figure 8. As can be seen, relying solely on global features may lead to a loss of details, such as the distorted text 'VANS' in the first row and the missing letter 'C' in the second row. Moreover, if only local features are provided, the results may also contain unfaithful textures, such as artifacts on the person's chest. In contrast, we fuse global and local features through joint attention blocks, which refines details in the garments while preserving semantic information.

Conclusion
We introduce a novel and practical Multi-View Virtual Try-On (MV-VTON) task, which aims to reconstruct the dressing results of a person from multiple views using the frontal and back views of the clothing. To achieve this, we propose a diffusion-based method. Specifically, the view-adaptive selection mechanism extracts more reasonable clothing features based on the similarity between the poses of the person and the two garments. The joint attention block aligns the global and local features of the selected clothing to the target person and fuses them. In addition, we collect a multi-view garment dataset for this task. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on both frontal-view and multi-view virtual try-on tasks, compared with existing methods.
Acknowledgments
This work was supported by the National Key R&D Program
of China (2022YFA1004100).
Appendix
IMPLEMENTATION DETAILS
Figure A: Visualization of the person and the corresponding poses (rows: target person, pose, selected garment). We select one of the garments based on the relative positions of the left and right arms in the skeleton image when performing hard-selection on the multi-view virtual try-on task.

Hard-Selection. In this section, we present more details about the proposed hard-selection for global clothing features. Specifically, for the multi-view virtual try-on task, we use OpenPose (Cao et al. 2017; Simon et al. 2017; Wei et al. 2016) to extract the skeleton images of the target person, the frontal clothing, and the back clothing as the pose information ph, pf, and pb, respectively. After that, we decide whether to use the frontal-view or the back-view clothing based on the relative positions of the target person's left arm and right arm in the skeleton image. As shown in Figure A, if the right arm appears to the left of the left arm in the skeleton image (columns one to three in Figure A), the frontal-view clothing is chosen; otherwise, the back-view clothing is preferred (columns four and five in Figure A). In addition, following previous works (Gou et al. 2023; Xie et al. 2023; Morelli et al. 2023), we adopt an additional warping network (Ge et al. 2021b; Kim et al. 2023) to obtain the pre-warped clothing.
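A sketch of this rule applied to OpenPose output; the keypoint indices assume the BODY_25 layout, and using the shoulders as a proxy for the arms is an assumption.

```python
import numpy as np

R_SHOULDER, L_SHOULDER = 2, 5   # OpenPose BODY_25 keypoint indices (assumed)

def choose_view(person_keypoints: np.ndarray) -> str:
    """Hard-selection rule: if the right arm appears to the left of the left arm
    in the skeleton image, the person faces the camera, so the frontal-view
    clothing is selected; otherwise the back-view clothing is preferred."""
    right_x = person_keypoints[R_SHOULDER, 0]
    left_x = person_keypoints[L_SHOULDER, 0]
    return "frontal" if right_x < left_x else "back"
```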
Results on Multi-View VTON. The second to fourth columns depict persons from different views, while the fifth to seventh columns showcase the corresponding try-on results. As can be seen, our method can generate realistic dressing results for the person from multiple views, given the two views of clothing. Furthermore, our method retains the details of the original clothing well (e.g., the buttons in the fifth row) and generates high-fidelity try-on images even under occlusion (e.g., the hair occlusion in the second row). In conclusion, the proposed method exhibits outstanding performance on the multi-view virtual try-on task.
Complex Human Pose Results on Frontal-View VTON.
In this section, we provide more VTON results under com-
plex human pose conditions in Figure F. It can be seen
that our method can also generate high-fidelity try-on results
even when the target person has a more complex pose.
Comparison Results on Frontal-View VTON. In this section, we show more visual comparison results on the VITON-HD (Choi et al. 2021) and DressCode (Morelli et al. 2022) datasets. The previous works include Paint By Example (Yang et al. 2023), PF-AFN (Ge et al. 2021b), GP-VTON (Xie et al. 2023), LaDI-VTON (Morelli et al. 2023), DCI-VTON (Gou et al. 2023), and StableVITON (Kim et al. 2023). The results are shown in Figure G. In the first and second rows of Figure G, it can be seen that our method better preserves the shape of the original clothing (e.g., the cuff in the second row) compared to the previous methods. In addition, our method outperforms previous methods in preserving high-frequency details, such as the patterns on the clothing in the fourth and sixth rows. Moreover, in contrast to previous methods, MV-VTON is not constrained by specific types of clothing and achieves highly realistic effects across a wide range of garment styles (e.g., the garment in the third row and the collar in the eighth row). In summary, our method also shows superiority on the frontal-view virtual try-on task.

High Resolution Results on Frontal-View VTON. In this section, we present more results at 1024×768 resolution on the VITON-HD (Choi et al. 2021) and DressCode (Morelli et al. 2022) datasets, as shown in Figure H. Specifically, we utilize the model trained at 512×384 resolution to directly test at 1024×768 resolution. Despite the difference in resolution between training and testing, our method can still produce high-fidelity try-on results. For instance, the generated images preserve both the intricate patterns and the text adorning the clothing (in the first row) while also effectively maintaining the original shapes (in the last row).

Figure D: Visualization of bad cases on the VITON-HD dataset.
LIMITATIONS
Despite outperforming previous methods on both frontal-
view and multi-view virtual try-on tasks, our method does
not perform well in all cases. Figure D displays some unsat-
isfactory try-on results. As can be seen, although our method
can preserve the shape and texture of original clothing (e.g.,
the ’DIESEL’ text in the first row), it is difficult for it to
fully preserve some smaller or more complex details (e.g.,
the parts circled in red). The reason for this phenomenon
may be that these details are easily lost when inpainting in
latent space. We will try to solve this issue in future work.