Update README.md
README.md CHANGED
@@ -15,12 +15,14 @@ license_link: LICENSE.md
 
 ---
 
+**Update 7/9/25:** This model is now quantized and implemented in [this example space](https://huggingface.co/spaces/LPX55/Kontext-Multi_Lightning_4bit-nf4/). Seeing preliminary VRAM usage at around ~10GB with faster inference. Will be experimenting with different weights and schedulers to find particularly well-performing libraries.
+
 # FLUX.1 Kontext-dev X LoRA Experimentation
 
 Highly experimental, will update with more details later.
 
 - 6-8 steps
-- Euler, SGM Uniform (Recommended, feel free to play around)
+- <s>Euler, SGM Uniform (Recommended, feel free to play around)</s> Getting mixed results now, feel free to play around and share.
 
 ## Model Details
 
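
For reference, here is a minimal, unverified sketch of how the settings above (6-8 steps on the default Euler flow-match scheduler) could be applied when loading this LoRA onto FLUX.1 Kontext-dev with diffusers. It assumes a recent diffusers release that includes `FluxKontextPipeline`; the LoRA path, input image, and prompt are placeholders rather than files from this repo, and the code is not taken from the linked example space.

```python
# Hypothetical usage sketch (not from this repo or the linked space).
# Assumes a recent diffusers release that ships FluxKontextPipeline;
# the LoRA filename, input image, and prompt are placeholders.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("path/to/lora.safetensors")  # placeholder LoRA path
pipe.to("cuda")

init_image = load_image("input.png")  # image to be edited (placeholder)
result = pipe(
    image=init_image,
    prompt="describe the edit you want",
    num_inference_steps=8,   # 6-8 steps, per the notes above
    guidance_scale=2.5,      # commonly used default for Kontext-dev
).images[0]
result.save("output.png")
```

Quantizing the transformer to 4-bit nf4 (as described for the linked space) would be the likely route to the ~10GB VRAM figure mentioned above; it is omitted here to keep the sketch minimal.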