---
language:
  - en
license: cc-by-nc-4.0
library_name: transformers
base_model: mistralai/Ministral-8B-Instruct-2410
base_model_relation: finetune
tags:
  - conversational
  - assistant
  - fine-tuned
  - lora
  - collaborative
  - vanta-research
  - conversational-ai
  - chat
  - warm
  - friendly-ai
  - persona
  - personality
  - alignment
model-index:
  - name: atom-v1-8b-preview
    results: []
---


**VANTA Research**

Independent AI safety research lab specializing in cognitive fit, alignment, and human-AI collaboration.



# Atom v1 8B Preview

Atom v1 8B Preview is a fine-tuned conversational AI model designed for collaborative problem-solving and thoughtful dialogue. Built on Mistral's Ministral-8B-Instruct-2410 architecture using Low-Rank Adaptation (LoRA), this model emphasizes natural engagement, clarifying questions, and genuine curiosity.

## Quick Start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("vanta-research/atom-v1-8b-preview", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("vanta-research/atom-v1-8b-preview")

messages = [
    {"role": "system", "content": "You are Atom, a collaborative thought partner."},
    {"role": "user", "content": "How do neural networks learn?"}
]

# add_generation_prompt=True appends the assistant turn marker so the model replies
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

## Model Details

- Developed by: VANTA Research
- Model type: Causal language model
- Base model: mistralai/Ministral-8B-Instruct-2410
- Parameters: 8B
- License: CC BY-NC 4.0
- Training method: LoRA fine-tuning (illustrative configuration sketch below)
- Format: Transformers (FP16) + GGUF (Q4_0)
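
The card does not publish the training hyperparameters, so the snippet below is only a minimal sketch of how a LoRA fine-tune of the Ministral-8B base is typically set up with the `peft` library. The rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for Atom v1.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model that Atom v1 was fine-tuned from
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-8B-Instruct-2410", device_map="auto"
)

# Hypothetical adapter settings -- the actual values for Atom v1 are not disclosed
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```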

## Capabilities

Optimized for:

- Collaborative problem-solving (see the multi-turn sketch after this list)
- Technical explanations with accessible analogies
- Code generation and debugging
- Exploratory conversations
- Educational dialogue
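
As a rough illustration of the collaborative, clarifying-question style, the sketch below continues the Quick Start session (it reuses the `model` and `tokenizer` objects loaded there). The conversation content is invented for demonstration, not a recorded model transcript.

```python
# Reuses `model` and `tokenizer` from the Quick Start snippet above.
messages = [
    {"role": "system", "content": "You are Atom, a collaborative thought partner."},
    {"role": "user", "content": "My training loss is stuck at 2.3 and won't budge."},
    {"role": "assistant", "content": "Let's narrow it down: is the loss flat from step zero, or does it plateau later? And what learning rate are you using?"},
    {"role": "user", "content": "Flat from step zero, constant learning rate of 1e-2."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)

# Print only Atom's new reply
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```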

## Files

- `*.safetensors` - Merged model weights (FP16)
- `atom-ministral-8b-q4_0.gguf` - Quantized model for Ollama/llama.cpp (see the sketch after this list)
- `config.json` - Model configuration
- `tokenizer.json` - Tokenizer
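
One way to run the quantized GGUF file locally is through the `llama-cpp-python` bindings. A minimal sketch, assuming the package is installed and the GGUF file has been downloaded to the working directory (the context size is an arbitrary choice):

```python
from llama_cpp import Llama

# Load the Q4_0 quantized weights with llama.cpp
llm = Llama(model_path="atom-ministral-8b-q4_0.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Atom, a collaborative thought partner."},
        {"role": "user", "content": "Walk me through how gradient descent works."},
    ],
    max_tokens=512,
    temperature=0.8,
)

print(response["choices"][0]["message"]["content"])
```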

## License

CC BY-NC 4.0 - Non-commercial use only. Contact VANTA Research for commercial licensing.

## Citation

```bibtex
@software{atom_v1_8b_preview,
  title = {Atom v1 8B Preview},
  author = {VANTA Research},
  year = {2025},
  url = {https://huggingface.co/vanta-research/atom-v1-8b-preview}
}
```