---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
base_model: mistralai/Ministral-8B-Instruct-2410
base_model_relation: finetune
tags:
- conversational
- assistant
- fine-tuned
- lora
- collaborative
- vanta-research
- conversational-ai
- chat
- warm
- friendly-ai
- persona
- personality
- alignment
model-index:
- name: atom-v1-8b-preview
results: []
---
<div align="center">

<h1>VANTA Research</h1>
<p><strong>Independent AI safety research lab specializing in cognitive fit, alignment, and human-AI collaboration</strong></p>
<p>
<a href="https://unmodeledtyler.com"><img src="https://img.shields.io/badge/Website-unmodeledtyler.com-yellow" alt="Website"/></a>
<a href="https://x.com/vanta_research"><img src="https://img.shields.io/badge/@vanta_research-1DA1F2?logo=x" alt="X"/></a>
<a href="https://github.com/vanta-research"><img src="https://img.shields.io/badge/GitHub-vanta--research-181717?logo=github" alt="GitHub"/></a>
</p>
</div>

---

# Atom v1 8B Preview
Atom v1 8B Preview is a fine-tuned conversational AI model designed for collaborative problem-solving and thoughtful dialogue. Built on Mistral's Ministral-8B-Instruct-2410 architecture using Low-Rank Adaptation (LoRA), this model emphasizes natural engagement, clarifying questions, and genuine curiosity.
## Quick Start
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/atom-v1-8b-preview", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/atom-v1-8b-preview")

messages = [
    {"role": "system", "content": "You are Atom, a collaborative thought partner."},
    {"role": "user", "content": "How do neural networks learn?"},
]

# Apply the chat template and append the assistant generation prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
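If you prefer a higher-level interface, the same checkpoint can also be driven through the `transformers` pipeline API. The snippet below is a minimal sketch, assuming a recent `transformers` release in which text-generation pipelines accept chat-style message lists; the example prompt is illustrative only.

```python
from transformers import pipeline

# Load the model through the high-level text-generation pipeline
chat = pipeline(
    "text-generation",
    model="vanta-research/atom-v1-8b-preview",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Atom, a collaborative thought partner."},
    {"role": "user", "content": "Explain gradient descent with a simple analogy."},
]

# The pipeline applies the chat template and returns the conversation with the new assistant turn
result = chat(messages, max_new_tokens=256, do_sample=True, temperature=0.8)
print(result[0]["generated_text"][-1]["content"])
```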
## Model Details
- **Developed by:** VANTA Research
- **Model type:** Causal language model
- **Base model:** mistralai/Ministral-8B-Instruct-2410
- **Parameters:** 8B
- **License:** CC BY-NC 4.0
- **Training method:** LoRA fine-tuning
- **Format:** Transformers (FP16) + GGUF (Q4_0)
## Capabilities
Optimized for:
- Collaborative problem-solving
- Technical explanations with accessible analogies
- Code generation and debugging
- Exploratory conversations
- Educational dialogue
## Files
- `*.safetensors` - Merged model weights (FP16)
- `atom-ministral-8b-q4_0.gguf` - Quantized model for Ollama/llama.cpp (see the sketch below)
- `config.json` - Model configuration
- `tokenizer.json` - Tokenizer files
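
For the quantized build, one option is the `llama-cpp-python` bindings. This is a hedged sketch, assuming the GGUF file has been downloaded locally and `llama-cpp-python` is installed (`pip install llama-cpp-python`); the prompt and generation settings are illustrative.

```python
from llama_cpp import Llama

# Load the Q4_0 quantized weights with the llama.cpp bindings
llm = Llama(model_path="atom-ministral-8b-q4_0.gguf", n_ctx=4096)

# llama-cpp-python exposes an OpenAI-style chat completion helper
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Atom, a collaborative thought partner."},
        {"role": "user", "content": "What makes a good debugging strategy?"},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file can also be imported into Ollama by pointing a Modelfile's `FROM` line at it and running `ollama create`.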
## License
CC BY-NC 4.0 - Non-commercial use only. Contact VANTA Research for commercial licensing.
## Citation
```bibtex
@software{atom_v1_8b_preview,
  title  = {Atom v1 8B Preview},
  author = {VANTA Research},
  year   = {2025},
  url    = {https://huggingface.co/vanta-research/atom-v1-8b-preview}
}
```