Description
This is how I use diffusers to load the FLUX model:
import torch
from diffusers import FluxPipeline

# Load the pipeline from a local checkpoint in fp16
pipe = FluxPipeline.from_pretrained(
    "/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.float16,
)

device_number = 0  # index of the target GPU
device = torch.device(f"cuda:{device_number}" if torch.cuda.is_available() else "cpu")
pipe = pipe.to(device)
This takes about 75 seconds on my machine with an A800 GPU.
But I found that ComfyUI needs only about 22 seconds to load the FLUX model, although it loads the fp8 version of the weights.
Can diffusers load the FLUX fp8 model, or is there another way to speed up loading?
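For reference, one approach I have seen is to load only the transformer from a ComfyUI-style fp8 single-file checkpoint with `FluxTransformer2DModel.from_single_file`, then pass it into the pipeline so the text encoders and VAE still come from the original repo. Reading half as many bytes from disk is what shortens loading. A minimal sketch, assuming a recent diffusers release with single-file FLUX support; the fp8 checkpoint path below is hypothetical:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Hypothetical local path to an fp8 FLUX transformer checkpoint
# (a single .safetensors file, as used by ComfyUI)
FP8_CKPT = "/ckptstorage/flux1-dev-fp8.safetensors"

# Load only the transformer from the fp8 file; weights are upcast to
# bfloat16 for computation, but the file on disk is half the size of fp16
transformer = FluxTransformer2DModel.from_single_file(
    FP8_CKPT,
    torch_dtype=torch.bfloat16,
)

# Reuse the remaining components (text encoders, VAE, scheduler) from the repo
pipe = FluxPipeline.from_pretrained(
    "/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
```

This only reduces load time for the transformer; whether it matches ComfyUI's 22 seconds would need measuring on the same disk.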