Inference Endpoint (dedicated) not available

#16
by janhornych - opened

Hi,
Will it be possible to deploy this model to a dedicated host? The Deploy → Inference Endpoints (dedicated) option is there, but no suitable servers are available for deployment; the GPU/INF2 options are disabled.

(Screenshot: Inference Endpoints deployment dialog with the GPU/INF2 instance options greyed out.)
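For reference, this is roughly what the programmatic equivalent via `huggingface_hub` would look like; the endpoint name, repository ID, and instance parameters below are placeholders, and I assume the same GPU/INF2 availability limits apply there as in the UI:

```python
# Sketch of a dedicated endpoint deployment with huggingface_hub
# (pip install huggingface_hub). All names and instance parameters
# are placeholders; which accelerators and instance types are actually
# selectable depends on the model and on the account's quota and region,
# the same as in the Deploy UI.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-dedicated-endpoint",       # placeholder endpoint name
    repository="<model-repo-id>",  # the model this thread is about
    framework="pytorch",
    task="text-generation",        # adjust to the model's actual task
    accelerator="gpu",             # the option that is greyed out in the UI
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x1",            # placeholder size
    instance_type="nvidia-a10g",   # placeholder; pick an available type
)
endpoint.wait()                    # block until the endpoint is running
print(endpoint.url)
```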
Thanks,
Jan

