nielsr (HF Staff) committed
Commit a3eff64 · verified · 1 Parent(s): 269b286

Add pipeline tag, library name, paper, and code links to model card


This PR enhances the model card for Omni-Reward by:

* Adding `library_name: transformers` to correctly enable the "how to use" widget, as the model's file structure (e.g., `config.json`, `tokenizer_config.json`) indicates compatibility with the Hugging Face Transformers library.
* Setting `pipeline_tag: any-to-any` to accurately reflect its omni-modal capabilities (handling text, image, video, audio, and 3D data) and improve discoverability on the Hub.
* Including a direct link to the paper ([Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences](https://huggingface.co/papers/2510.23451)) and the official GitHub repository (`https://github.com/HongbangYuan/OmniReward`) in the introductory link section.
* Correcting a minor `</a></a>` typo in the existing benchmark link.

These updates improve the model's discoverability and provide users with comprehensive information and usage guidance.
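The `library_name` and `pipeline_tag` keys above live in the YAML front matter at the top of `README.md` (the block delimited by `---` lines), which the Hub parses to drive the widget and search filters. As a minimal illustration of what this change does structurally, the following sketch parses such a front-matter block and checks the new keys. It assumes PyYAML is installed; `parse_front_matter` and the inline card text are illustrative helpers, not part of the PR.

```python
import yaml  # PyYAML, assumed available


def parse_front_matter(card_text: str) -> dict:
    """Extract the YAML front-matter block delimited by '---' lines.

    Minimal sketch: assumes the card starts with '---' and that '---'
    does not occur inside the YAML block itself.
    """
    parts = card_text.split("---")
    # parts[0] is the empty string before the opening '---';
    # parts[1] is the YAML between the two delimiters.
    return yaml.safe_load(parts[1]) or {}


# Front matter as added by this PR (body text truncated for illustration).
card = """---
base_model:
- openbmb/MiniCPM-o-2_6
datasets:
- jinzhuoran/OmniRewardData
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: any-to-any
---

# Omni-Reward
"""

meta = parse_front_matter(card)
assert meta["library_name"] == "transformers"
assert meta["pipeline_tag"] == "any-to-any"
```

Because the Hub reads these two keys, adding them is enough to switch on the Transformers "how to use" widget and to list the model under the any-to-any pipeline filter.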

Files changed (1): README.md (+9 −8)

README.md CHANGED

@@ -1,18 +1,20 @@
 ---
-license: cc-by-nc-4.0
-datasets:
-- jinzhuoran/OmniRewardData
 base_model:
 - openbmb/MiniCPM-o-2_6
+datasets:
+- jinzhuoran/OmniRewardData
+license: cc-by-nc-4.0
+library_name: transformers
+pipeline_tag: any-to-any
 ---
 
-
-
 # Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences
 
 
 <p align="center">
- <a href="https://huggingface.co/datasets/HongbangYuan/OmniRewardBench"> 🤗 Benchmark</a></a> |
+ <a href="https://huggingface.co/papers/2510.23451"> 📚 Paper</a> |
+ <a href="https://github.com/HongbangYuan/OmniReward"> 💻 Code</a> |
+ <a href="https://huggingface.co/datasets/HongbangYuan/OmniRewardBench"> 🤗 Benchmark</a> |
 <a href="https://hf.co/datasets/jinzhuoran/OmniRewardData"> 🤗 Dataset</a> |
 <a href="https://hf.co/jinzhuoran/OmniRewardModel"> 🤗 Model</a> |
 <a href="https://omnireward.github.io/"> 🏠 Homepage</a>
@@ -102,5 +104,4 @@ bash scripts/eval_ti2t_tie.sh
 
 - `--eval_dataset`: Specifies the evaluation dataset (e.g., `omni_t2t`, `omni_t2i`, `omni_t2v`, etc.).
 
-- `--eval_tie`: Enables w/ Ties evaluation.
-
+- `--eval_tie`: Enables w/ Ties evaluation.
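With both hunks applied, the reconstructed front matter of `README.md` (assembled from the added and retained lines above) reads:

```yaml
---
base_model:
- openbmb/MiniCPM-o-2_6
datasets:
- jinzhuoran/OmniRewardData
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: any-to-any
---
```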