Currently, our leaderboard doesn't support automatically running models from the HF Hub through our benchmark (we're working on it!). However, you can send us a request with the model name, revision, and precision, and we'll run the LLM-as-a-judge evaluation for your model and update the leaderboard!
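
For example, a request would contain roughly the following information. This is only an illustrative sketch; the field names and values below are not an official schema:

```python
# Illustrative submission info (not an official schema) for a leaderboard request:
submission = {
    "model": "my-org/my-model",   # model ID on the HF Hub (hypothetical example)
    "revision": "main",           # branch, tag, or commit hash to evaluate
    "precision": "bfloat16",      # e.g. float16, bfloat16, 8bit, 4bit
}
```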

Additionally, you can use our methodology to evaluate models on another open benchmark using the code available in the [repository](https://huggingface.co/spaces/MTSAIR/ru_leaderboard/tree/main).
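
To get the code locally, one option is to download the Space's files with the `huggingface_hub` library (a minimal sketch; you can also clone the repository with git):

```python
# Sketch: fetch the leaderboard Space's code locally via huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="MTSAIR/ru_leaderboard",
    repo_type="space",  # the evaluation code lives in a Space repository
)
print(f"Leaderboard code downloaded to: {local_path}")
```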