# Theta-Crucis-0.6B-Turbo1
Theta-Crucis-0.6B-Turbo1 is a compact, high-performance model designed for code generation, technical reasoning, and structured output tasks. Fine-tuned from Qwen/Qwen3-0.6B-Base on the Mixture of Thoughts (MoT) dataset with an emphasis on its code expert clusters, it delivers fast, accurate coding assistance in low-resource environments. At only 0.6B parameters, it offers strong fluency in programming, structured syntax, and technical language generation.
**GGUF:** [prithivMLmods/Theta-Crucis-0.6B-Turbo1-GGUF](https://huggingface.co/prithivMLmods/Theta-Crucis-0.6B-Turbo1-GGUF)
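For on-device use with the GGUF weights, a minimal sketch with `llama-cpp-python` is shown below. The quantization filename pattern (`*Q8_0.gguf`) is an assumption; match it to whichever `.gguf` files are actually published in the GGUF repo.

```python
# Sketch: local inference with the GGUF build via llama-cpp-python.
# Assumption: the repo ships a Q8_0 quantization; adjust the filename
# pattern to the .gguf files actually published.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Theta-Crucis-0.6B-Turbo1-GGUF",
    filename="*Q8_0.gguf",  # glob-matched against files in the repo
    n_ctx=4096,             # context window; tune to the device
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert code assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```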
## Key Features
- **MoT Fine-Tuning on Code Expert Clusters**: Fine-tuned on the Mixture of Thoughts (MoT) dataset, with an emphasis on high-quality programming data spanning multiple languages, debugging patterns, and code-reasoning structures.
- **Turbo Code Generation & Debugging**: Excels at generating well-structured, clean code in Python, JavaScript, C++, and more; capable of explaining logic, identifying bugs, and suggesting improvements.
- **Structured Output Capabilities**: Supports output in Markdown, JSON, YAML, and LaTeX, making it well suited for auto-documentation, API formatting, and configuration-file generation (see the structured-output example after the quickstart).
- **Technical Fluency Across Languages**: Handles code queries and explanations in over 20 natural languages, enabling global developer support and multilingual documentation.
- **Lightweight, Inference-Optimized Design**: Suitable for deployment on edge devices, laptops, or VRAM-limited GPUs, with fast inference and strong accuracy on technical prompts.
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Theta-Crucis-0.6B-Turbo1"

# Load the model and tokenizer; device_map="auto" places weights on GPU if available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function that checks if a string is a palindrome. Explain each step."
messages = [
    {"role": "system", "content": "You are an expert code assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
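Building on the quickstart, the sketch below asks the model for structured (JSON) output, one of the use cases listed under Key Features. The prompt schema is illustrative, and since decoding is unconstrained, the result should be validated before downstream use.

```python
# Sketch: requesting JSON output through the text-generation pipeline.
import json
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Theta-Crucis-0.6B-Turbo1",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert code assistant. Reply with JSON only."},
    {"role": "user", "content": (
        "Describe the Python `sorted` builtin as a JSON object with keys "
        "'name', 'signature', and 'summary'."
    )},
]

result = pipe(messages, max_new_tokens=256, return_full_text=False)
raw = result[0]["generated_text"]

try:
    # Pretty-print if the model returned valid JSON.
    print(json.dumps(json.loads(raw), indent=2))
except json.JSONDecodeError:
    # Small models sometimes wrap JSON in prose or code fences; fall back to raw text.
    print(raw)
```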
## Intended Use
- Programming education, code synthesis, and debugging support
- Structured data and config file generation (e.g., JSON, YAML)
- Developer assistant roles in multilingual and technical environments
- Deployment on constrained devices with high code output needs
- Fast prototyping and script generation across languages
## Limitations
- May underperform in long conversational or abstract language tasks
- Context length limitations can restrict multi-file or large project reasoning
- Not designed for creative writing or open-ended dialogue
- Focuses on technical and structured domains; general conversational fluency is limited