No trigger words are required. A LoRA weight of 1.0 is recommended.

## Inference

```python
import torch
from diffusers import QwenImagePipeline

# Load the base Qwen-Image model and apply the AWPortrait-QW LoRA
pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Shakker-Labs/AWPortrait-QW", weight_name="AWPortrait-QW_1.0.safetensors")
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

prompt = "Black and white portrait of an Asian woman with dynamic hair movement, wearing a dark jacket against a light background."

image = pipe(
    prompt=prompt,
    negative_prompt="blurry, bad faces, bad hands, worst quality, low quality, jpeg artifacts",
    width=1328,
    height=1328,
    num_inference_steps=30,
    true_cfg_scale=4.0,
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
image.save("example.png")
```
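The `lora_scale` passed to `fuse_lora` controls how strongly the LoRA update is blended into the base weights: conceptually, fusing computes `W' = W + scale * (B @ A)`, so a weight of 1.0 applies the full LoRA update and 0.0 leaves the base model untouched. A minimal sketch of that update with toy matrices (hypothetical values for illustration, not the actual diffusers internals):

```python
# Toy illustration of LoRA weight fusion: W' = W + scale * (B @ A).
# Matrices are tiny hypothetical examples, not real model weights.

def matmul(B, A):
    """Multiply two matrices given as lists of lists."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def fuse_lora(W, A, B, scale=1.0):
    """Return the fused weight W + scale * (B @ A)."""
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# Base 2x2 weight, rank-1 LoRA factors B (2x1) and A (1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]

print(fuse_lora(W, A, B, scale=1.0))  # full-strength LoRA (the recommended weight of 1.0)
print(fuse_lora(W, A, B, scale=0.0))  # scale 0 leaves the base weights unchanged
```

Because the update is a simple scaled addition, lowering `lora_scale` is a cheap way to soften the LoRA's effect if the style is too strong.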

## Online Inference

You can also try this model on [Liblib AI](https://www.liblib.art/modelinfo/cadba03ead404ea3bc1cda3d30803d84?from=feed&versionUuid=52ac709e59be444ba5965bb73493a9db&rankExpId=RVIyX0wyI0VHMTEjRTE3X0wzI0VHMjAjRTMw).