codingrobot committed · Commit 3e9c37e · verified · 1 Parent(s): af98802

Model card auto-generated by SimpleTuner

Files changed (1): README.md added (+187 −0)
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- image-to-image
- diffusers
- simpletuner
- not-for-all-audiences
- lora
- template:sd-lora
- standard
pipeline_tag: text-to-image
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'a breathtaking anime-style portrait of nikolai, capturing his essence with vibrant colors and expressive features'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'a high-quality, detailed photograph of nikolai as a sous-chef, immersed in the art of culinary creation'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'a lifelike and intimate portrait of nikolai, showcasing his unique personality and charm'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
- text: 'a cinematic, visually stunning photo of nikolai, emphasizing his dramatic and captivating presence'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_4_0.png
- text: 'an elegant and timeless portrait of nikolai, exuding grace and sophistication'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_5_0.png
- text: 'a dynamic and adventurous photo of nikolai, captured in an exciting, action-filled moment'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_6_0.png
- text: 'a mysterious and enigmatic portrait of nikolai, shrouded in shadows and intrigue'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_7_0.png
- text: 'a vintage-style portrait of nikolai, evoking the charm and nostalgia of a bygone era'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_8_0.png
- text: 'an artistic and abstract representation of nikolai, blending creativity with visual storytelling'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_9_0.png
- text: 'a futuristic and cutting-edge portrayal of nikolai, set against a backdrop of advanced technology'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_10_0.png
- text: 'A picture of nikolai'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_11_0.png
---

# simpletuner-lora

This is a PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:

```
A picture of nikolai
```

## Validation settings
- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance: none

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 29
- Training steps: 500
- Learning rate: 0.0001
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad value: 1.0
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flux_lora_target=mmdit'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Base model precision: `fp8-torchao`
- Caption dropout probability: 0.1%
- LoRA Rank: 16
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
- LoRA mode: Standard

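The batch-size figures above are related: the effective batch size is the product of the micro-batch size, the gradient accumulation steps, and the number of GPUs. A quick sanity check with the values from this run:

```python
# Effective batch size = micro-batch size x gradient accumulation steps x number of GPUs
micro_batch_size = 1
gradient_accumulation_steps = 1
num_gpus = 1

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # -> 1, matching the "Effective batch size" reported above
```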
## Datasets

### dreambooth-subject
- Repeats: 0
- Total number of images: 17
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

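The oddly precise "1.048576 megapixels" figure is simply the 1024x1024 resolution used elsewhere in this card, expressed in decimal megapixels:

```python
# 1024 x 1024 pixels expressed in (decimal) megapixels
width, height = 1024, 1024
pixels = width * height          # 1_048_576 pixels
megapixels = pixels / 1_000_000  # 1.048576 megapixels, as reported in the dataset section
print(pixels, megapixels)
```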
## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'codingrobot/simpletuner-lora'

# Load the base model directly in bf16, then attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

prompt = "A picture of nikolai"

# Optional: quantise the transformer to save on VRAM.
# Note: the base model was quantised during training, so it is recommended
# to do the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

# Move the pipeline to the best available device; it is already in its
# target precision level.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

model_output = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]

model_output.save("output.png", format="PNG")
```