---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
tags:
- image-generation
- flux
- diffusion-single-file
pipeline_tag: image-to-image
---

# flux1-kontext-dev-fp8

This repository provides FP8-quantized weights for [black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev). Both E4M3FN and E5M2 FP8 formats are available.

## Model Overview

- **Source model:** [black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)
- **Quantization:** All weights are converted to FP8 (float8_e4m3fn and float8_e5m2) for reduced storage and memory usage.
- **Intended use:** Research and testing on hardware and software stacks that support FP8 (PyTorch 2.4+, CUDA 12.4+, Hopper/Ada GPUs).
- **Disclaimer:** FP8 quantization may reduce output quality relative to the original bfloat16 weights, and not all software/hardware stacks support FP8.
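
The two formats trade range for precision: E4M3FN spends 4 bits on exponent and 3 on mantissa (no infinities; largest finite value ±448), while E5M2 spends 5 on exponent and 2 on mantissa (IEEE-style, with infinities; largest finite value ±57344). The decoding rules can be sketched in pure Python (illustrative only — real FP8 paths run in PyTorch/CUDA kernels, not code like this):

```python
def decode_e4m3fn(b: int) -> float:
    """Decode one E4M3FN byte: 1 sign, 4 exponent (bias 7), 3 mantissa bits."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0xF
    man = b & 0x7
    if exp == 0:                     # subnormal range
        return sign * (man / 8) * 2.0**-6
    if exp == 0xF and man == 0x7:    # no infinities: the all-ones pattern is NaN
        return float("nan")
    return sign * (1 + man / 8) * 2.0 ** (exp - 7)

def decode_e5m2(b: int) -> float:
    """Decode one E5M2 byte: 1 sign, 5 exponent (bias 15), 2 mantissa bits."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 2) & 0x1F
    man = b & 0x3
    if exp == 0:                     # subnormal range
        return sign * (man / 4) * 2.0**-14
    if exp == 0x1F:                  # IEEE-style: infinities and NaNs
        return sign * float("inf") if man == 0 else float("nan")
    return sign * (1 + man / 4) * 2.0 ** (exp - 15)

print(decode_e4m3fn(0b0_1111_110))  # 448.0, largest finite E4M3FN value
print(decode_e5m2(0b0_11110_11))    # 57344.0, largest finite E5M2 value
```

E5M2's wider exponent makes it more robust to outlier weight magnitudes, while E4M3FN's extra mantissa bit gives finer resolution; which works better can depend on the model and the inference stack.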

## Key Features

1. **FP8 quantization:** Weights are provided in both E4M3FN and E5M2 FP8 formats for maximum compatibility.
2. **Storage and memory efficient:** Roughly halves model file size and GPU memory use compared to float16/bfloat16.
3. **ComfyUI compatible:** Ready to use with [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and other frameworks that support FP8.
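
The savings in point 2 follow from simple arithmetic: FP8 stores one byte per weight instead of two. Taking roughly 12B parameters for the FLUX.1 Kontext transformer (an approximate figure):

```python
params = 12e9            # approx. parameter count of the FLUX.1 Kontext transformer
bytes_bf16 = params * 2  # bfloat16/float16: 2 bytes per weight
bytes_fp8 = params * 1   # FP8 (E4M3FN or E5M2): 1 byte per weight
print(f"bf16: {bytes_bf16 / 2**30:.1f} GiB, fp8: {bytes_fp8 / 2**30:.1f} GiB")
# bf16: 22.4 GiB, fp8: 11.2 GiB
```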

## Files

- `flux1-kontext-dev-fp8-e4m3fn.safetensors` — FP8 (E4M3FN) quantized weights
- `flux1-kontext-dev-fp8-e5m2.safetensors` — FP8 (E5M2) quantized weights

## Usage (Recommended: ComfyUI)

We recommend using these weights with [ComfyUI](https://github.com/comfyanonymous/ComfyUI), which has built-in FP8 support. To use them:

1. Place the `.safetensors` file in your ComfyUI models directory.
2. Make sure your ComfyUI and PyTorch installation support FP8 (PyTorch 2.4+, CUDA 12.4+, Hopper/Ada GPU).
3. Select the FP8 model in your ComfyUI workflow as you would with standard weights.

## Credits

- Original model by [black-forest-labs](https://huggingface.co/black-forest-labs)

## License

This model falls under the [FLUX.1 \[dev\] Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev).

## Citation

```bibtex
@misc{labs2025flux1kontextflowmatching,
      title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space},
      author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith},
      year={2025},
      eprint={2506.15742},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2506.15742},
}
```