Commit 4621659
Parent(s): 76d28af
fix-readme (#109)
update readme (e3de54693ba9846e7ba3194ed987ce114291189d)
Co-authored-by: Ella Charlaix <[email protected]>
README.md CHANGED
@@ -146,12 +146,12 @@ pip install optimum[openvino]

To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline

  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
```
@@ -170,12 +170,12 @@ pip install optimum[onnxruntime]

To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline

  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
```