---
license: cc-by-nc-4.0
language:
- en
base_model:
- mistralai/Ministral-8B-Instruct-2410
base_model_relation: finetune
pipeline_tag: text-generation
library_name: transformers
tags:
- alignment
- conversational
- conversational-ai
- collaborate
- chat
- chatbot
- research
- persona
- personality
- friendly
- reasoning
- vanta-research
- LLM
- collaborative-ai
- frontier
- reflective
---

<div align="center">

![vanta_trimmed](https://cdn-uploads.huggingface.co/production/uploads/686c460ba3fc457ad14ab6f8/hcGtMtCIizEZG_OuCvfac.png)
  
  <h1>VANTA Research</h1>
    
  <p><strong>Independent AI research lab building safe, resilient language models optimized for human-AI collaboration</strong></p>
  
  <p>
    <a href="https://vantaresearch.xyz"><img src="https://img.shields.io/badge/Website-vantaresearch.xyz-black" alt="Website"/></a>
    <a href="https://merch.vantaresearch.xyz"><img src="https://img.shields.io/badge/Merch-merch.vantaresearch.xyz-sage" alt="Merch"/></a>
    <a href="https://x.com/vanta_research"><img src="https://img.shields.io/badge/@vanta_research-1DA1F2?logo=x" alt="X"/></a>
    <a href="https://github.com/vanta-research"><img src="https://img.shields.io/badge/GitHub-vanta--research-181717?logo=github" alt="GitHub"/></a>
  </p>
</div>

---

# Atom v1 8B Preview

**Developed by VANTA Research**

Atom v1 8B Preview is a fine-tuned language model designed to serve as a collaborative thought partner. Built on Mistral's Ministral-8B-Instruct-2410, it emphasizes natural dialogue, clarifying questions, and genuine engagement with complex problems.
The model was developed as part of a larger research and development project exploring Atom's persona and its compatibility across model architectures.

## Model Details

- **Model Type:** Causal language model (decoder-only transformer)
- **Base Model:** mistralai/Ministral-8B-Instruct-2410
- **Parameters:** 8 billion
- **Training Method:** Low-Rank Adaptation (LoRA) fine-tuning
- **License:** CC BY-NC 4.0 (Non-Commercial Use)
- **Language:** English
- **Developed by:** VANTA Research, Portland, Oregon

## Intended Use

Atom v1 8B Preview is designed for:

- Collaborative problem-solving and brainstorming
- Technical explanations with accessible analogies
- Code assistance and algorithmic reasoning
- Exploratory conversations that prioritize understanding over immediate answers
- Educational contexts requiring thoughtful dialogue

This model is optimized for conversational depth, asking clarifying questions, and maintaining warm, engaging interactions while avoiding formulaic assistant behavior.

## Training Data

The model was fine-tuned on a curated dataset comprising:

- Identity and persona examples emphasizing collaborative exploration
- Technical reasoning and coding challenges
- Multi-step problem-solving scenarios
- Conversational examples demonstrating warmth and curiosity
- Advanced coding tasks and algorithmic thinking

Training focused on developing a distinctive voice that balances technical competence with genuine engagement.

## Performance Characteristics

Atom v1 8B demonstrates strong capabilities in:

- **Persona Consistency:** Maintains collaborative, warm tone across diverse topics
- **Technical Explanation:** Uses metaphors and analogies to clarify complex concepts
- **Clarifying Questions:** Actively seeks to understand user intent and context
- **Creative Thinking:** Generates multiple frameworks and approaches to problems
- **Code Generation:** Produces working code with explanatory context
- **Reasoning:** Applies logical frameworks to abstract problems

## Limitations

- **Scale:** As an 8B parameter model, capabilities are constrained compared to larger frontier models
- **Domain Specificity:** Optimized for conversational collaboration; may underperform on narrow technical benchmarks
- **Quantization Trade-offs:** Q4_0 GGUF format prioritizes efficiency over maximum precision
- **Training Data:** Fine-tuning dataset size limits exposure to highly specialized domains
- **Factual Accuracy:** Users should verify critical information independently

## Ethical Considerations

This model is released for research and non-commercial applications. Users should:

- Verify outputs in high-stakes scenarios
- Avoid deploying in contexts requiring guaranteed accuracy
- Consider potential biases inherited from base model and training data
- Respect the non-commercial license terms

## Usage

### Hugging Face Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "vanta-research/atom-v1-8b-preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [
    {"role": "system", "content": "You are Atom, a collaborative thought partner who explores ideas together with curiosity and warmth."},
    {"role": "user", "content": "Can you explain how gradient descent works?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
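
For interactive use, output can also be streamed token by token. Below is a minimal sketch using transformers' `TextStreamer`; the prompt and sampling settings are illustrative:

```python
# Minimal streaming sketch (same model load as above).
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_name = "vanta-research/atom-v1-8b-preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# skip_prompt avoids echoing the input; decoded text prints as it arrives
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [{"role": "user", "content": "Explain big-O analysis of binary search."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8, streamer=streamer)
```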

### Ollama (GGUF)

The repository includes `atom-ministral-8b-q4_0.gguf` for efficient local inference:

```bash
# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./atom-ministral-8b-q4_0.gguf

TEMPLATE """{{- if .System }}<s>[INST] <<SYS>>
{{ .System }}
<</SYS>>

{{ .Prompt }}[/INST]{{ else }}<s>[INST]{{ .Prompt }}[/INST]{{ end }}{{ .Response }}</s>
"""

PARAMETER stop "</s>"
PARAMETER temperature 0.8
PARAMETER top_p 0.9
PARAMETER top_k 40

SYSTEM """You are Atom, a collaborative thought partner who explores ideas together with curiosity and warmth. You think out loud, ask follow-up questions, and help people work through complexity by engaging genuinely with their thinking process."""
EOF

# Register with Ollama
ollama create atom-v1-8b:latest -f Modelfile

# Run inference
ollama run atom-v1-8b:latest "What's a creative way to visualize time-series data?"
```
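
Once registered, the model can also be queried programmatically through Ollama's REST API (served at `http://localhost:11434` by default). A minimal sketch; the prompt is illustrative:

```python
# Query the locally registered model via Ollama's /api/generate endpoint.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "atom-v1-8b:latest",
        "prompt": "What's a creative way to visualize time-series data?",
        "stream": False,  # return the complete response as one JSON object
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```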

## Technical Specifications

- **Architecture:** Mistral-based transformer with Grouped Query Attention
- **Context Length:** 32,768 tokens
- **Vocabulary Size:** 131,072 tokens
- **Attention Heads:** 32 (8 key-value heads)
- **Hidden Dimension:** 4,096
- **Intermediate Size:** 12,288
- **LoRA Configuration:** r=16, alpha=32, targeting attention and MLP layers
- **Training:** 258 steps with bf16 precision and gradient checkpointing
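
For reference, the LoRA setup above corresponds roughly to the following PEFT configuration. This is a sketch, not the exact training config; the `target_modules` list is an assumption inferred from "attention and MLP layers":

```python
# Approximate PEFT LoraConfig matching the stated hyperparameters (r=16, alpha=32).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,           # low-rank dimension
    lora_alpha=32,  # scaling factor
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections (assumed)
        "gate_proj", "up_proj", "down_proj",     # MLP layers (assumed)
    ],
    task_type="CAUSAL_LM",
)
```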

## Citation

```bibtex
@software{atom_v1_8b_preview,
  title = {Atom v1 8B Preview},
  author = {VANTA Research},
  year = {2025},
  url = {https://huggingface.co/vanta-research/atom-v1-8b-preview},
  license = {CC-BY-NC-4.0}
}
```

## License

This model is released under the **Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)**.

You are free to:
- Share and adapt the model for non-commercial purposes, provided you attribute VANTA Research as the creator

You may not:
- Use this model for commercial purposes without explicit permission

## Contact

For questions, collaboration inquiries, or commercial licensing:
- **Email:** [email protected]


---

**Version:** 1.0.0-preview  
**Release Date:** November 2025  
**Status:** Preview release for research and evaluation