Describe the bug
torch.compile fails when compiling vae.encode, while compiling the whole vae works fine. Removing the @apply_forward_hook decorator from encode makes it work.
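For context, apply_forward_hook lives in diffusers/utils/accelerate_utils.py and is roughly the following; this is a paraphrased sketch from the diffusers source, not a verbatim copy. The decorator returns a plain closure, which appears to be what Dynamo trips over:

# Paraphrased sketch of diffusers.utils.accelerate_utils.apply_forward_hook.
# It wraps the method in a plain closure that runs accelerate's pre_forward
# hook (if one is attached) before delegating. Note that it does not use
# functools.wraps, so the original method is not reachable via __wrapped__.
def apply_forward_hook(method):
    def wrapper(self, *args, **kwargs):
        if hasattr(self, "_hf_hook") and hasattr(self._hf_hook, "pre_forward"):
            self._hf_hook.pre_forward(self)
        return method(self, *args, **kwargs)
    return wrapper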
Reproduction
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL

# Compiling the whole module works.
vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae',  # use vae_ema?
    local_files_only=False,
).to('cuda')
vae = torch.compile(vae, mode="max-autotune", fullgraph=True)  # ok
a = torch.randn(1, 3, 256, 256, device='cuda')
vae(a)

# Compiling only the encode method fails.
vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae',  # use vae_ema?
    local_files_only=False,
).to('cuda')
vae.encode = torch.compile(vae.encode, mode="max-autotune", fullgraph=True)  # error
a = torch.randn(1, 3, 256, 256, device='cuda')
vae.encode(a)
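A possible workaround (untested, and assuming the decorator sketch above is accurate, i.e. the undecorated method is the single free variable of the wrapper's closure) is to unwrap encode manually before compiling:

# Hypothetical workaround: pull the undecorated encode out of the wrapper's
# closure, rebind it to the instance, and compile that instead. This depends
# on the internal shape of apply_forward_hook sketched above and may break
# across diffusers versions.
raw_encode = type(vae).encode.__closure__[0].cell_contents
vae.encode = torch.compile(raw_encode.__get__(vae), mode="max-autotune", fullgraph=True)
vae.encode(a)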
Logs
File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 43, in wrapper
def wrapper(self, *args, **kwargs):
NameError: name 'torch' is not defined
System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.2
- Accelerate version: 1.0.1
- PEFT version: 0.13.2
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.5
- xFormers version: 0.0.28
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?