Navanit-AI committed
Commit 1fa3f3c · verified · 1 parent: b5c939d

Small changes in README for vLLM serve


The vLLM serve workflow has become much simpler, so the README is updated accordingly.

Files changed (1)
  1. README.md +1 -4
README.md CHANGED
@@ -94,10 +94,7 @@ transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
 vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
 
 ```bash
-uv pip install --pre vllm==0.10.1+gptoss \
-  --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
-  --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
-  --index-strategy unsafe-best-match
+uv pip install vllm
 
 vllm serve openai/gpt-oss-120b
 ```
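Once `vllm serve openai/gpt-oss-120b` is running, the server can be queried over its OpenAI-compatible API (the README's earlier `transformers chat` example already talks to `localhost:8000`). Below is a minimal sketch, assuming the default host/port; the prompt text and `max_tokens` value are illustrative, and the commented-out lines show the actual HTTP request, which requires the server to be up:

```python
import json

# Build a request body for the OpenAI-compatible /v1/chat/completions
# endpoint that `vllm serve` exposes (default: http://localhost:8000).
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 16,  # hypothetical value, adjust as needed
}
body = json.dumps(payload).encode("utf-8")

# Sending it requires the server started by `vllm serve` to be running:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])

print(json.loads(body)["model"])  # → openai/gpt-oss-120b
```

The same endpoint also accepts any OpenAI-compatible client library pointed at the local base URL.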