Thanks, but this model is garbage.
It adds tons of compression artefacts all over perfectly clean source images and does not obey prompts. How the beta crew let this through, I don't know. Flux Fill does a far better job.
You should pull this back and re-release it when it actually works and doesn't ruin images.
Well, it's working for me as well as for a lot of people.
It's just a question of your abilities.
Editing my original comment because of misinformation in premade workflows...
If you're using ComfyUI, do NOT use the stitch node to add multiple reference images. Just connect two reference latent nodes together.
Also make sure that the resolutions of your reference images are divisible by 64 (see the sketch below).
I made a post about it on GitHub if anyone's interested. https://github.com/comfyanonymous/ComfyUI/issues/8711#issuecomment-3015947794
I'm sure my fix isn't perfect, but it works for me so far.
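
In case it helps anyone, here is a minimal sketch in plain Python with Pillow (outside any ComfyUI workflow) of what "divisible by 64" means for a reference image; the helper name and the choice to round down rather than up are my own assumptions:

from PIL import Image

def snap_to_multiple(image: Image.Image, multiple: int = 64) -> Image.Image:
    # Round each side down to the nearest multiple of 64 so the latent grid
    # divides evenly; rounding down avoids having to upscale the source.
    w = max((image.width // multiple) * multiple, multiple)
    h = max((image.height // multiple) * multiple, multiple)
    return image.resize((w, h), Image.LANCZOS)

# Example: a 1000x750 reference image becomes 960x704 before it is encoded.
ref = snap_to_multiple(Image.open("reference.png"))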
Did you update ComfyUI-Kontext to 0.3.3? I thought exactly the same until I found out that I was actually using ComfyUI-Kontext 0.3.2.
Are you sure it needs to be divisible by 64, not 32? Looking at the Comfy nodes for Flux Kontext, there is a constant defined for the preferred image resolutions (see the sketch after the list):
PREFERED_KONTEXT_RESOLUTIONS = [
(672, 1568),
(688, 1504),
(720, 1456),
(752, 1392),
(800, 1328),
(832, 1248),
(880, 1184),
(944, 1104),
(1024, 1024),
(1104, 944),
(1184, 880),
(1248, 832),
(1328, 800),
(1392, 752),
(1456, 720),
(1504, 688),
(1568, 672),
]
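
For what it's worth, here is a rough sketch (my own guess, not necessarily what the Comfy node does internally) of picking the closest entry from that list for a given image by matching aspect ratio; the function name closest_kontext_resolution is made up for illustration:

def closest_kontext_resolution(width: int, height: int) -> tuple[int, int]:
    # Choose the preferred (w, h) pair whose aspect ratio is closest to the
    # input's, using the PREFERED_KONTEXT_RESOLUTIONS list defined above.
    aspect = width / height
    return min(PREFERED_KONTEXT_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - aspect))

# Example: a 3000x2000 photo maps to (1248, 832).
print(closest_kontext_resolution(3000, 2000))

Also worth noting: every width and height in that list is a multiple of 16, but not all of them are multiples of 32 or 64 (688 and 752, for example), so the list by itself doesn't settle the 64-vs-32 question.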
Have to agree, it's bad. It makes people 1 kg fatter every time you run it, and it's about as hopeless as it gets at fixing hands, even with inpainting. Try to rerun an image it messed up and it will give you three fingers 200 generations in a row, whereas SDXL can fix the same image in 5 generations. Honestly, I don't see the fuss with this model. It can't generate anything but photographic work, and that's its limit: changing hair and changing clothes, yes, it does that well, but anything else, forget it. Artistic images, forget it; fixing artistic images, double forget it.
To be quite honest, I still think SDXL with LoRAs is better than ALL the FLUX models, which are very much overhyped. I'm half convinced it's designed this way to try to push people onto paid subscriptions and get NVIDIA more sales.
It's just a question of your abilities.
Not true. I've sat and done LITERALLY 300 generations in a row trying to get KONTEXT to fix hands, watching in despair as it gives you three fingers or savage mutations 298 times, and the one or two times it gets the hands right, it has turned the character into a hippo. Don't get me wrong, it has its uses: it can change hair and clothes well in simple photographic images where the human body is already perfect. But it has almost no artistic or pose understanding whatsoever, and any image that is remotely artistic it will screw up 100% of the time. SDXL with LoRAs is faster, better, more stable, and smarter, and that's just direct experience. I despair with KONTEXT 99% of the time I'm using it.
I have no idea why this model works well enough for me and not for other people.