Improve model card: Update pipeline tag, add project page and HF paper link (#1)
Commit d052a70174f2ad602357e83f856b45e38a979a98
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED

```diff
@@ -1,18 +1,18 @@
 ---
+base_model:
+- google/t5-v1_1-xxl
+datasets:
+- allenai/tulu-v2-sft-mixture
+language:
+- en
+library_name: transformers
 license: apache-2.0
+pipeline_tag: text-generation
 tags:
 - language-modeling
 - causal-lm
 - bias-analysis
 - cognitive-bias
-datasets:
-- allenai/tulu-v2-sft-mixture
-language:
-- en
-base_model:
-- google/t5-v1_1-xxl
-pipeline_tag: text2text-generation
-library_name: transformers
 ---
 
 # Model Card for T5-Tulu
@@ -20,8 +20,8 @@ library_name: transformers
 ## Model Details
 
 **Model Description**
-This 🤗 Transformers model was finetuned using LoRA adapters for the
-**"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**
+This 🤗 Transformers model was finetuned using LoRA adapters for the paper:
+**"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"** ([Hugging Face Paper](https://huggingface.co/papers/2507.07186), [arXiv](https://arxiv.org/abs/2507.07186))
 We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
 This is one of 3 identical versions trained with different random seeds.
 
@@ -30,6 +30,7 @@
 - **License**: Apache 2.0
 - **Finetuned from**: `google/t5-v1_1-xxl`
 - **Paper**: https://arxiv.org/abs/2507.07186
+- **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
 - **Repository**: https://github.com/itay1itzhak/planted-in-pretraining
 
 ## Uses
```
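For readers who want to apply this kind of model card update (pipeline tag, base model, extra links) from a script rather than the web editor, here is a minimal sketch using the `huggingface_hub` ModelCard API. The repo id `itay1itzhak/T5-Tulu` is a placeholder, and this is not a claim about how this particular commit was produced.

```python
# Sketch: updating model card metadata programmatically with huggingface_hub.
# Assumption: the repo id below is a placeholder; this commit itself was not
# necessarily made this way.
from huggingface_hub import ModelCard

REPO_ID = "itay1itzhak/T5-Tulu"  # placeholder repo id

card = ModelCard.load(REPO_ID)  # fetch the current README.md and its metadata
card.data.pipeline_tag = "text-generation"
card.data.base_model = ["google/t5-v1_1-xxl"]

# Open the change as a pull request instead of committing to main directly.
card.push_to_hub(
    REPO_ID,
    commit_message="Improve model card: Update pipeline tag, add project page and HF paper link",
    create_pr=True,
)
```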
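Since the card states the model was finetuned with LoRA adapters on top of `google/t5-v1_1-xxl` and declares `library_name: transformers`, a minimal loading sketch follows. The adapter repo id is a placeholder, and the assumption that the repo hosts LoRA adapter weights (rather than merged weights) is not confirmed by the diff.

```python
# Sketch: loading the model described by this card with 🤗 Transformers + PEFT.
# Assumptions: the adapter repo id is a placeholder, and the repo is assumed to
# host LoRA adapter weights on top of google/t5-v1_1-xxl (not merged weights).
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "google/t5-v1_1-xxl"      # base model named in the card
ADAPTER_ID = "itay1itzhak/T5-Tulu"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForSeq2SeqLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA adapters

prompt = "Answer the question: which option do most people prefer, A or B?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```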