Flux.1: The New Reference in Image Generation
Everything about Flux.1 Dev, Schnell, and Pro – the powerful model family from Black Forest Labs.
01 What Is Flux.1?
Flux.1 is a family of text-to-image models developed by Black Forest Labs – founded by former Stability AI researchers. Flux has quickly established itself as one of the best open-source alternatives for image generation and delivers better results than SDXL in many benchmarks.
02 Flux.1 Variants
Flux is available in three variants:
- Flux.1 [schnell]: The fastest variant, Apache 2.0 license (completely free). 4 sampling steps are enough for good results. Ideal for fast iterations and prototyping.
- Flux.1 [dev]: The developer variant with better quality than Schnell. Non-commercial license. 20–30 steps recommended. Best balance of quality and speed.
- Flux.1 [pro]: Commercial model, only available via API. Highest quality but not locally usable.
03 Technical Details
Flux is based on a rectified-flow transformer architecture, i.e. a diffusion model built on transformer blocks. The model is significantly larger than SDXL (about 12 billion parameters). Its native resolution is 1024x1024, but flexible aspect ratios are well supported. Flux uses T5-XXL and CLIP-L as text encoders, which gives it excellent prompt understanding.
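To use a non-square aspect ratio at roughly the native pixel count, the width and height have to be rounded to a size the model can process. The helper below is a sketch under the assumption that dimensions should snap to multiples of 16 (a common constraint for latent-diffusion models); the function name is ours.

```python
import math

def flux_dims(aspect: float, base: int = 1024, multiple: int = 16) -> tuple[int, int]:
    """Pick a (width, height) near base*base total pixels for a given
    aspect ratio, rounded to the nearest `multiple` on each side.
    The multiple-of-16 constraint is an assumption, not an official spec."""
    height = math.sqrt(base * base / aspect)   # keep pixel count ~ base^2
    width = aspect * height
    snap = lambda x: max(multiple, round(x / multiple) * multiple)
    return snap(width), snap(height)
```

For instance, `flux_dims(16 / 9)` yields 1360x768, close to the native 1024x1024 pixel budget.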
04 VRAM Requirements
Flux is more resource-intensive than SDXL. In FP16, you need at least 16 GB VRAM. With FP8 quantization, 12 GB VRAM is possible. For NF4 quantization, even 8 GB VRAM is sufficient, though with slight quality loss. The GGUF variants allow flexible quantization levels and CPU offloading.
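A back-of-envelope way to see where these numbers come from: weight size scales with bytes per parameter. The sketch below assumes roughly 12 billion parameters and ignores runtime overhead (activations, text encoders, VAE), which is why the practical VRAM figures above differ from the raw weight footprint; offloading can also push parts of the model out of VRAM.

```python
# Approximate bytes per parameter for common quantization levels.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "nf4": 0.5}

def weight_footprint_gb(params_billion: float, precision: str) -> float:
    """Rough size of the model weights alone, in GiB.
    Actual VRAM use adds activations, text encoders, and the VAE."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 2**30
```

For a ~12B-parameter model this gives roughly 22 GiB at FP16, 11 GiB at FP8, and 6 GiB at NF4, which is why FP8 fits in 12 GB cards and NF4 in 8 GB cards.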
05 Flux in ComfyUI
Flux is excellently supported in ComfyUI. You need: the Flux model (Safetensors), the T5-XXL text encoder, the CLIP-L text encoder, and the Flux VAE. For the optimal workflow, we recommend using our Flux workflows from the ComfyVault Gallery – they are pre-configured and tested.
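The files need to land in the right ComfyUI model folders. A minimal sketch, assuming a standard ComfyUI install layout and the commonly used file names (adjust `COMFY` and the names to your setup; downloads are omitted):

```shell
# Sketch of the ComfyUI model folder layout for Flux.
# COMFY and the file names below are assumptions for a typical install.
COMFY=./ComfyUI
mkdir -p "$COMFY/models/unet" "$COMFY/models/clip" "$COMFY/models/vae"
# Place the files as follows (download steps omitted):
#   flux1-dev.safetensors    -> models/unet/   (the Flux model)
#   t5xxl_fp16.safetensors   -> models/clip/   (T5-XXL text encoder)
#   clip_l.safetensors       -> models/clip/   (CLIP-L text encoder)
#   ae.safetensors           -> models/vae/    (Flux VAE)
```

With the files in place, ComfyUI's Flux loader nodes can pick them up from these folders.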
06 Flux vs. SDXL
In direct comparison, Flux offers better prompt fidelity, more natural images, and fewer artifacts. SDXL, on the other hand, has a larger ecosystem of LoRAs and community models, requires less VRAM, and is faster. For maximum quality, we recommend Flux; for maximum flexibility, SDXL.
Hardware Recommendations
The best hardware for local AI generation. Our recommendations based on price-performance and compatibility.
Graphics Cards (GPU)
- NVIDIA RTX 3060 12GB (Entry, from ~$300): Best entry-level model for local AI. 12 GB VRAM is sufficient for SDXL and small LLMs.
- NVIDIA RTX 4070 Ti Super 16GB (Recommended, from ~$800): Ideal mid-range GPU. 16 GB VRAM for Flux, SDXL, and medium-sized LLMs.
- NVIDIA RTX 4090 24GB (High-End, from ~$1,800): High-end GPU for demanding models. 24 GB VRAM for Wan 2.2 14B and large LLMs.
- NVIDIA RTX 5090 32GB (Enthusiast, from ~$2,200): Maximum performance and VRAM. 32 GB for all current and future AI models.

* Affiliate links: If you purchase through these links, we receive a small commission at no additional cost to you. This helps us keep ComfyVault free.