Use `attn_implementation` instead of `_attn_implementation`

#21
by qubvel-hf - opened
Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -84,7 +84,7 @@ processor = AutoProcessor.from_pretrained(model_path)
 model = AutoModelForImageTextToText.from_pretrained(
     model_path,
     torch_dtype=torch.bfloat16,
-    _attn_implementation="flash_attention_2"
+    attn_implementation="flash_attention_2"
 ).to("cuda")
 ```
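
For context, a minimal sketch of how the corrected README snippet reads end to end. The repo id assigned to `model_path` is a placeholder here; the actual value is defined earlier in the README, outside this diff hunk.

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "<model-repo-id>"  # placeholder; the README sets the real repo id

processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # public kwarg; the underscore-prefixed name is internal
).to("cuda")
```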