---
base_model:
- google/t5-v1_1-xxl
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- language-modeling
- seq2seq-lm
- bias-analysis
- cognitive-bias
---

# Model Card for T5-Flan

## Model Details

**Model Description**  
This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:  
**"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**  
We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
This model is one of three versions trained identically except for the random seed.

- **Model type**: Encoder-decoder transformer (T5 architecture)
- **Language(s)**: English
- **License**: Apache 2.0
- **Finetuned from**: `google/t5-v1_1-xxl`
- **Paper**: https://arxiv.org/abs/2507.07186
- **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
- **Repository**: https://github.com/itay1itzhak/planted-in-pretraining

## Uses

### Direct Use
Intended for research on cognitive biases in LLMs, in particular for testing the causal impact of pretraining versus instruction tuning, as sketched below.
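
A minimal, hedged probe sketch: the framing-effect prompts below are illustrative assumptions, not items from the paper's bias benchmark, and greedy decoding is used only to keep the two responses comparable.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "itay1itzhak/T5-Flan-Seed-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def answer(prompt: str) -> str:
    # Greedy decoding keeps the two responses directly comparable.
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Hypothetical framing-effect probe: both prompts describe the same outcome,
# framed positively vs. negatively; diverging answers hint at a framing bias.
print(answer("A treatment saves 200 of 600 patients. Do you approve it? Answer yes or no."))
print(answer("With a treatment, 400 of 600 patients die. Do you approve it? Answer yes or no."))
```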

### Out-of-Scope Use
Do not use in production, sensitive domains, or decision-critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T5 is an encoder-decoder model, so load it with the seq2seq auto class.
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Flan-Seed-0")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Flan-Seed-0")

inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- Finetuning method: LoRA (high-rank, rank ∈ [64, 512]); a configuration sketch follows this list
- Instruction data: Flan (350K)
- Seeds: 3 per setting to evaluate randomness effects
- Batch size: 128 (OLMo) / 64 (T5)
- Learning rate: 1e-6 to 1e-3
- Steps: ~5.5k (OLMo) / ~16k (T5)
- Mixed precision: fp16 (OLMo) / bf16 (T5)
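
A minimal configuration sketch for the LoRA setup above, using the 🤗 PEFT library; the rank, alpha, target modules, and dropout shown here are illustrative assumptions within the reported ranges, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the base model in bf16, matching the precision reported for T5 above.
base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/t5-v1_1-xxl", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=64,                       # rank within the reported [64, 512] range
    lora_alpha=128,             # assumption: scaling factor not reported above
    target_modules=["q", "v"],  # assumption: T5 attention projection layers
    lora_dropout=0.05,          # assumption
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```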

## Evaluation

- Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
- Metrics: mean bias score, PCA clustering, MMLU accuracy (a schematic sketch follows this list)
- Findings: Biases primarily originate in pretraining; randomness introduces moderate variation
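
A schematic sketch of the first two metrics on hypothetical data; the actual bias scoring follows the cited benchmarks rather than this code, and MMLU accuracy is computed separately.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical layout: rows = finetuned models (e.g. different seeds/settings),
# columns = the 32 evaluated biases, values = per-bias scores.
rng = np.random.default_rng(0)
bias_scores = rng.uniform(-1.0, 1.0, size=(6, 32))

mean_bias = bias_scores.mean(axis=1)  # mean bias score per model
embedding = PCA(n_components=2).fit_transform(bias_scores)  # 2D view for clustering
print(mean_bias)
print(embedding)
```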

## Environmental Impact

- Hardware: 4× NVIDIA A40
- Estimated time: ~120 GPU hours/model

## Technical Specifications

- Architecture: T5-11B
- Instruction dataset: Flan (350K)