psynote123 committed · verified
Commit 9eb8ae0 · Parent(s): cf4919d

Update README.md

Files changed (1): README.md (+11 −3)
README.md CHANGED
````diff
@@ -114,7 +114,7 @@ print(f"# A:\n{output}\n")
 ```
 
 __System requirements:__
-* GPUs: H100, L40s
+* GPUs: H100, L40s, 5090, 4090
 * CPU: AMD, Intel
 * Python: 3.10-3.12
 
@@ -123,10 +123,18 @@ To work with our models just run these lines in your terminal:
 
 ```shell
 pip install thestage
-pip install thestage_elastic_models[nvidia]
-
+pip install 'thestage-elastic-models[nvidia]'
 pip install flash_attn==2.7.3 --no-build-isolation
+
+# or for blackwell support
+pip install 'thestage-elastic-models[blackwell]'
+pip install torch==2.7.0+cu128 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
+# please download the appropriate version of Wheels for your system from https://github.com/Zarrac/flashattention-blackwell-wheels-whl-ONLY-5090-5080-5070-5060-flash-attention-/releases/tag/FlashAttention
+mv flash_attn-2.7.4.post1-rtx5090-torch2.7.0cu128cxx11abiTRUE-cp311-linux_x86_64.whl flash_attn-2.7.4.post1-0rtx5090torch270cu128cxx11abiTRUE-cp311-cp311-linux_x86_64.whl
+pip install flash_attn-2.7.4.post1-0rtx5090torch270cu128cxx11abiTRUE-cp311-cp311-linux_x86_64.whl
+
 pip uninstall apex
+
 ```
 
 Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
````
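A note on the `mv` step added in this commit: pip only installs wheels whose filenames follow the PEP 427 pattern `name-version(-buildtag)?-pythontag-abitag-platformtag.whl`, where an optional build tag must start with a digit. The downloaded Blackwell wheel name puts `rtx5090` where a build tag would go (no leading digit) and omits the abi tag, so the rename folds the extra fields into a valid build tag (`0rtx5090…`) and repeats `cp311` as the abi tag. A minimal sketch of that naming rule — the parser below is illustrative only and is not part of pip or thestage:

```python
def parse_wheel_filename(filename: str):
    """Split a wheel filename into its PEP 427 fields.

    Returns (name, version, build, python_tag, abi_tag, platform_tag);
    build is '' when no build tag is present. Raises ValueError for
    names that pip would reject.
    """
    parts = filename.removesuffix(".whl").split("-")
    if len(parts) == 5:
        name, version, py, abi, plat = parts
        build = ""
    elif len(parts) == 6:
        name, version, build, py, abi, plat = parts
    else:
        raise ValueError(f"not a valid wheel filename: {filename}")
    # PEP 427: a build tag, when present, must begin with a digit
    if build and not build[0].isdigit():
        raise ValueError(f"build tag must start with a digit: {build!r}")
    return name, version, build, py, abi, plat

# Downloaded name: 'rtx5090' lands in the build-tag slot without a leading digit
bad = "flash_attn-2.7.4.post1-rtx5090-torch2.7.0cu128cxx11abiTRUE-cp311-linux_x86_64.whl"
# Renamed: one digit-prefixed build tag, and cp311 repeated as the abi tag
good = "flash_attn-2.7.4.post1-0rtx5090torch270cu128cxx11abiTRUE-cp311-cp311-linux_x86_64.whl"
```

With the renamed file, `parse_wheel_filename(good)` succeeds while the original name raises, which mirrors why `pip install` only accepts the wheel after the `mv`.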