Commit 52a766e (verified) · 1 parent: d3d806e
Committed by itay1itzhak and nielsr (HF Staff)

Update model card: Refine pipeline tag, license, and add project page (#1)


- Update model card: Refine pipeline tag, license, and add project page (b56f665c303cdf321d0473cd96e65e0b7537b16c)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md (+14 -11)
README.md CHANGED

@@ -1,18 +1,20 @@
  ---
- license: apache-2.0
- tags:
- - language-modeling
- - causal-lm
- - bias-analysis
- - cognitive-bias
+ base_model:
+ - google/t5-v1_1-xxl
  datasets:
  - allenai/tulu-v2-sft-mixture
  language:
  - en
- base_model:
- - google/t5-v1_1-xxl
- pipeline_tag: text2text-generation
  library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
+ tags:
+ - language-modeling
+ - causal-lm
+ - bias-analysis
+ - cognitive-bias
+ metrics:
+ - accuracy
  ---

  # Model Card for T5-Tulu
@@ -23,13 +25,14 @@ library_name: transformers
  This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:
  **"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**
  We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
- This is one of 3 idnetical versions trained with different random seeds.
+ This is one of 3 identical versions trained with different random seeds.

  - **Model type**: encoder-decoder based transformer
  - **Language(s)**: English
- - **License**: Apache 2.0
+ - **License**: MIT
  - **Finetuned from**: `google/t5-v1_1-xxl`
  - **Paper**: https://arxiv.org/abs/2507.07186
+ - **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
  - **Repository**: https://github.com/itay1itzhak/planted-in-pretraining

  ## Uses
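
Since the updated metadata keeps `library_name: transformers` and the card describes an encoder-decoder T5 finetune of `google/t5-v1_1-xxl`, a minimal loading sketch is shown below. The Hub repo id `itay1itzhak/T5-Tulu` is an assumption for illustration only, and if the repository hosts LoRA adapters rather than merged weights, loading would go through `peft` instead.

```python
# Minimal sketch (not from the model card): loading an encoder-decoder T5 checkpoint.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "itay1itzhak/T5-Tulu"  # hypothetical Hub id, not stated in this diff

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)

prompt = "Answer the following question: What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```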