torch.compile errors on vae.encode #10937

@Luciennnnnnn

Describe the bug

torch.compile fails when compiling vae.encode, while compiling the whole vae works fine.

After removing the @apply_forward_hook decorator from encode, it works.
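For context, @apply_forward_hook wraps methods like encode so that accelerate's module hooks still fire when the method is called directly instead of through forward. A simplified sketch of that decorator pattern (not the actual diffusers implementation; FakeVAE and the hook attribute name here are illustrative):

```python
import functools


def apply_forward_hook(method):
    # Simplified sketch: the wrapper looks for an accelerate-style hook on
    # the instance and fires its pre_forward before delegating to the
    # original method. torch.compile on the *bound method* ends up tracing
    # this wrapper rather than the plain method.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        hook = getattr(self, "_hf_hook", None)
        if hook is not None and hasattr(hook, "pre_forward"):
            hook.pre_forward(self)
        return method(self, *args, **kwargs)

    return wrapper


class FakeVAE:
    @apply_forward_hook
    def encode(self, x):
        return x * 2


vae = FakeVAE()
print(vae.encode(3))  # -> 6
```

Compiling the whole module only traces forward, so the wrapper is never on the traced path; compiling vae.encode directly hands dynamo the wrapper itself, which is where the failure surfaces.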

Reproduction

import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae', # use vae_ema?
    local_files_only=False,
).to('cuda')
vae = torch.compile(vae, mode="max-autotune", fullgraph=True) # ok
a = torch.randn(1, 3, 256, 256, device='cuda')
vae(a)

vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae', # use vae_ema?
    local_files_only=False,
).to('cuda')
vae.encode = torch.compile(vae.encode, mode="max-autotune", fullgraph=True) # error
a = torch.randn(1, 3, 256, 256, device='cuda')
vae.encode(a)

Logs

  File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
    return fn(*args, **kwargs)
  File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 43, in wrapper
    def wrapper(self, *args, **kwargs):
NameError: name 'torch' is not defined

System Info

  • 🤗 Diffusers version: 0.32.2
  • Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.15
  • PyTorch version (GPU?): 2.4.1+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.26.2
  • Transformers version: 4.46.2
  • Accelerate version: 1.0.1
  • PEFT version: 0.13.2
  • Bitsandbytes version: 0.44.1
  • Safetensors version: 0.4.5
  • xFormers version: 0.0.28
  • Accelerator: 8× NVIDIA A100-SXM4-80GB, 81920 MiB each
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@sayakpaul @DN6 @yiyixuxu
