# Poseidon-Reasoning-1.7B
Poseidon-Reasoning-1.7B is a general-purpose, high-efficiency reasoning model fine-tuned from Qwen3-1.7B on the first 70K entries of the Poseidon-Reasoning-5M dataset. Designed for mathematical, scientific, and code-related reasoning, it balances structured logic with contextual fluency, making it well suited to domains that demand symbolic precision and algorithmic thought.
GGUF: https://huggingface.co/prithivMLmods/Poseidon-Reasoning-1.7B-GGUF
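For CPU-friendly local inference, the quantized weights can be loaded through `llama-cpp-python`. This is a minimal sketch, not a tested recipe: the `Poseidon-Reasoning-1.7B.Q8_0.gguf` file name is hypothetical, so substitute whichever quantization you download from the GGUF repo.

```python
from llama_cpp import Llama

# Hypothetical file name: replace with the quantization you actually downloaded.
llm = Llama(
    model_path="./Poseidon-Reasoning-1.7B.Q8_0.gguf",
    n_ctx=4096,  # context window size
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a structured reasoning assistant for math, science, and code."},
        {"role": "user", "content": "Solve: What is the derivative of sin(x) * ln(x)?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```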
## Key Features
- **Versatile Reasoning Model**: Fine-tuned for multi-domain reasoning tasks, including mathematics, scientific computation, and code logic, and capable of navigating structured problem-solving and analytic workflows.
- **Qwen3-1.7B Foundation**: Built on Qwen3-1.7B, providing multilingual reasoning capability, efficient token handling, and strong alignment with instruction-following tasks.
- **Powered by Poseidon-Reasoning-5M (70K Sample Subset)**: Trained on a carefully selected subset of 70K entries from the Poseidon-Reasoning-5M dataset, focusing on tasks that emphasize symbolic accuracy, step-by-step thinking, and STEM-relevant clarity.
- **Balanced Thinking Mode**: Supports structured, guided thinking without excessive hallucination or unnecessary verbosity; well suited to prompt-driven logic tasks of moderate complexity (see the sketch after this list).
- **Rich Format Output**: Produces Markdown, Python, LaTeX, and tabular structures, useful for notebooks, scientific documentation, and programmatic outputs.
- **1.7B Parameter Footprint**: Lightweight enough to run on mid-tier GPUs or CPU-only environments while offering scalable reasoning power for research, teaching, and light automation.
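Because the tokenizer inherits Qwen3's chat template, the base model's `enable_thinking` switch should carry over to this fine-tune; the sketch below assumes the template is preserved unchanged, which has not been verified for this checkpoint.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Poseidon-Reasoning-1.7B")

messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]

# enable_thinking is a Qwen3 chat-template flag; assumed (not verified) to be
# preserved by this fine-tune. Set False to suppress the <think> reasoning trace.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
```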
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Poseidon-Reasoning-1.7B"

# Load the model and tokenizer; device_map="auto" places weights on GPU when available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve: What is the derivative of sin(x) * ln(x)?"
messages = [
    {"role": "system", "content": "You are a structured reasoning assistant for math, science, and code."},
    {"role": "user", "content": prompt}
]

# Render the chat turns into the model's prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)

# Drop the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
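If the completion contains a Qwen3-style `<think>...</think>` trace (an assumption inherited from the base model, not a documented guarantee of this fine-tune), the final answer can be split off from the reasoning:

```python
# Separate the reasoning trace from the final answer.
# Assumption: the fine-tune preserves Qwen3's <think>...</think> convention;
# the fallback branch handles completions without the tags.
if "</think>" in response:
    thinking, answer = response.split("</think>", 1)
    thinking = thinking.replace("<think>", "").strip()
else:
    thinking, answer = "", response

print(answer.strip())
```

For the example prompt, the product rule gives $\frac{d}{dx}\left[\sin(x)\ln(x)\right] = \cos(x)\ln(x) + \frac{\sin(x)}{x}$, which makes a quick sanity check on the model's output.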
## Intended Use
- General-purpose symbolic reasoning
- Math and science tutoring, theorem solving, and computational guidance
- Structured code generation under constraints and other STEM-oriented tasks
- Lightweight environments where interpretability and precision matter
- Prompt-driven reasoning with deterministic steps
## Limitations
- Not designed for broad open-domain conversation
- May underperform on creative writing or emotional expression
- Best results occur with clear problem statements and goal-directed prompts
- Less suitable for speculative or abstract reasoning without structure