🚀 Next-270M (xt330)

Lightweight, Efficient, and Türkiye-Focused AI

License: MIT · Language: English · HuggingFace


📖 Overview

Next-270M is a 270-million parameter causal language model based on Gemma 3, designed for efficiency, low-resource deployment, and reasoning-focused natural language understanding.

Key highlights:

  • Extremely lightweight — can run on consumer GPUs with low VRAM.
  • Optimized for text reasoning, summarization, and creative generation.
  • Supports Turkish natively while remaining multilingual.
  • Open-source and transparent for research and applications.

Ideal for developers, students, and organizations that need fast, reliable text generation on low-resource hardware.
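As a rough, illustrative calculation of why the model fits on low-VRAM hardware (weights only; activations and the KV cache add overhead on top):

```python
# Back-of-the-envelope weight-memory estimate for a 270M-parameter model.
params = 270_000_000
for name, bytes_per_param in [("f32", 4), ("f16/bf16", 2), ("q8", 1)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gib:.2f} GiB of weights")
# f32: ~1.01 GiB, f16/bf16: ~0.50 GiB, q8: ~0.25 GiB
```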


Our Next 1B and Next 4B models outperform all comparable tiny models on these benchmarks:

| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
|---|---|---|---|---|
| Next 4B preview (s325) | 84.6 | 66.9 | 82.7 | 70.5 |
| Next 1B (t327) | 87.3 | 69.2 | 90.5 | 70.1 |
| Qwen 3 0.6B | 52.81 | 37.6 | 60.7 | 20.5 |
| Llama 3.2 1B | 49.3 | 44.4 | 11.9 | 30.6 |
| Kumru 7B (not verified) | 30.7 | 28.6 | 15.38 | 6.4 |

Our Next Z1 model also leads state-of-the-art models on some of these benchmarks:

| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
|---|---|---|---|---|
| Next Z1 (l294) | 97.3 | 94.2 | 97.7 | 93.2 |
| Next Z1 (l294, no tools) | 94.7 | 90.1 | 94.5 | 88.7 |
| GPT-5 | 92.5 | 87.0 | 98.4 | 96.0 |
| Claude Opus 4.1 (Thinking) | ~92.0 | 87.8 | 84.7 | 95.4 |

🎯 Goals

  1. Lightweight Efficiency: Run smoothly on low-resource devices.
  2. Reasoning-Focused: Provide logical and coherent text outputs.
  3. Accessibility: Fully open-source with clear documentation.
  4. Multilingual Adaptability: Turkish-focused but supports other languages.

✨ Key Features

| Feature | Description |
|---|---|
| 🔋 Lightweight Architecture | Optimized for low VRAM usage; ideal for small GPUs or CPU deployment. |
| 🇹🇷 Turkish & Multilingual | Handles complex Turkish prompts accurately. |
| 🧠 Reasoning Capabilities | Logical chain-of-thought for question answering and problem solving. |
| 📊 Consistent Outputs | Reliable and reproducible results across multiple runs. |
| 🌍 Open Source | Transparent, research-friendly, and community-driven. |

📐 Model Specifications

| Specification | Details |
|---|---|
| Base Model | Gemma 3 |
| Parameter Count | 270 million |
| Architecture | Transformer, causal LLM |
| Fine-Tuning Method | Instruction fine-tuning (SFT) on Turkish and multilingual datasets |
| Optimizations | Quantization-ready (q8, f16, f32) |
| Use Cases | Text generation, summarization, Q&A, creative writing, reasoning tasks |
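Since the model is quantization-ready and published in BF16, a minimal loading sketch in reduced precision (assuming `transformers` is installed, plus `accelerate` for `device_map`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Lamapi/next-270m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Half-precision weights roughly halve memory use vs. the f32 default.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # published tensors are BF16; use float16 if BF16 is unsupported
    device_map="auto",           # requires `accelerate`; places weights on GPU if available, else CPU
)
```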

🚀 Installation & Usage

Install the dependencies (`pip install transformers torch`), then use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Lamapi/next-270m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat messages
messages = [
    {"role": "system", "content": "You are Next-X1, a smart and concise AI assistant trained by Lamapi. Always respond in the user's language. Proudly made in Turkey."},
    {"role": "user", "content": "Hello, how are you?"}
]

# Render the chat template and tokenize
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")

# Generate and decode the model's reply
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Example output:

```
Hello, how are you?
I'm fine, thank you. How are you?
```
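For longer or more varied outputs, the standard `generate` sampling arguments apply (the values below are illustrative, not tuned defaults):

```python
# Sampled generation instead of greedy decoding.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sample from the distribution rather than taking the argmax
    temperature=0.7,  # lower values make outputs more deterministic
    top_p=0.9,        # nucleus-sampling cutoff
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```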

📄 License

MIT License — free to use, modify, and distribute. Attribution appreciated.


📞 Contact & Support


Next-270M — Lightweight, efficient, and reasoning-focused, advancing Türkiye's AI on low-resource hardware.

Follow on HuggingFace
