---
datasets:
- prithivMLmods/Poseidon-Reasoning-5M
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
library_name: transformers
tags:
- text-generation-inference
- moe
- code
- science
- biology
- chemistry
- thinking
pipeline_tag: text-generation
---

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/vXEwxMVMiov1zhOFUt6AJ.png)

# **Poseidon-Reasoning-1.7B**

> **Poseidon-Reasoning-1.7B** is a general-purpose, high-efficiency reasoning model fine-tuned from **Qwen3-1.7B** on the first 70K entries of the **Poseidon-Reasoning-5M** dataset. Designed for **mathematical, scientific, and code-related reasoning**, it balances structured logic with contextual fluency, making it well suited to domains that demand symbolic precision and algorithmic thought.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Poseidon-Reasoning-1.7B-GGUF](https://huggingface.co/prithivMLmods/Poseidon-Reasoning-1.7B-GGUF)

## **Key Features**

1. **Versatile Reasoning Model**
   Fine-tuned for multi-domain reasoning tasks, including mathematics, scientific computation, and code logic, and capable of navigating structured problem-solving and analytic workflows.

2. **Qwen3-1.7B Foundation**
   Built on **Qwen3-1.7B**, providing multilingual reasoning capability, efficient token handling, and strong alignment with instruction-following tasks.

3. **Powered by Poseidon-Reasoning-5M (70K Sample Subset)**
   Trained on a carefully selected 70K-entry subset of the **Poseidon-Reasoning-5M** dataset, focusing on tasks that emphasize **symbolic accuracy**, **step-by-step thinking**, and **STEM-relevant clarity**.

4. **Balanced Thinking Mode**
   Supports structured, guided thinking without excessive hallucination or unnecessary verbosity. Ideal for prompt-driven logic tasks of moderate complexity.

5. **Rich Format Output**
   Produces **Markdown**, **Python**, **LaTeX**, and tabular structures, which is helpful for notebooks, scientific documentation, and programmatic outputs.

6. **1.7B Parameter Footprint**
   Lightweight enough to run on **mid-tier GPUs or CPU-only environments**, while offering scalable reasoning power for research, teaching, and light automation.

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Poseidon-Reasoning-1.7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve: What is the derivative of sin(x) * ln(x)?"
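
# Note (assumption): Qwen3-style chat templates expose an `enable_thinking`
# switch in `apply_chat_template`; if this fine-tune inherits it from the
# base model, passing enable_thinking=False in the call below suppresses the
# <think> trace and returns a shorter, answer-only completion.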

messages = [
    {"role": "system", "content": "You are a structured reasoning assistant for math, science, and code."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## **Intended Use**

* General-purpose symbolic reasoning
* Math and science tutoring, theorem solving, and computational guidance
* Structured coding under constraints or STEM-based tasks
* Lightweight environments where interpretability and precision matter
* Prompt-driven reasoning with deterministic steps

## **Limitations**

* Not designed for broad open-domain conversation
* May underperform on creative writing or emotional expression
* Best results occur with **clear problem statements and goal-directed prompts**
* Less suitable for speculative or abstract reasoning without structure
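
Because the model does best on goal-directed prompts, decoding settings also matter. The sketch below continues the quickstart and shows one reasonable sampling configuration; the specific values (temperature 0.6, top-p 0.95, top-k 20) mirror the guidance published for the Qwen3 base model and are an assumption here, not settings verified for this fine-tune.

```python
# Sampling sketch, reusing `model`, `tokenizer`, and `model_inputs` from the
# quickstart above. Values follow the Qwen3 base model's suggested settings for
# reasoning-style generation; treat them as a starting point, not a verified default.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,   # reasoning traces are often longer than the final answer
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]
print(response)
```

Omit the sampling arguments to fall back to greedy decoding when fully repeatable outputs matter more than output diversity.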