# Zira-Z.1
The Bilingual Beast Built on Qwen 2.5 (7B)
## 🧠 Model Highlights
Zira-Z.1 isn't just a model; it's a revolution in understanding both English and Hinglish.
Born from the powerful DNA of Qwen 2.5 (7B), this multilingual marvel was fine-tuned for raw text generation across two of the most widely spoken languages in the world.
- 🔥 Base: Qwen 2.5 - 7B (one of the finest open LLMs out there)
- 🗣️ Languages: English 🇬🇧 + Hinglish 🇮🇳 (code-mixed, no pure Hindi)
- 🧠 Training: fine-tuned on diverse bilingual corpora; clean, simple text format (non-instruct)
- 🦾 Purpose: general-purpose text generation, especially where English and Hinglish blend naturally
Please note: this is a base text-generation model, so its output can lack coherence. The instruct version has been delayed by resource constraints and is expected to launch in roughly 5 days.
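Because it's a base (non-instruct) model, Zira-Z.1 behaves as a text completer rather than a chat assistant: write your prompt as the opening of a passage you want continued. Here's a minimal sketch of that prompting style using the standard `transformers` pipeline; the prompt and sampling values are illustrative assumptions, not tuned settings:

```python
from transformers import pipeline

# Base models continue text, so the prompt is phrased as an unfinished passage.
generator = pipeline("text-generation", model="HyperX-Sen/Zira-Z.1")

prompt = "Aaj ka weather bahut accha hai, so I decided to"  # illustrative prompt
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```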
## Why Zira-Z.1?
Because multilingual LLMs are cool.
But Zira-Z.1 is cooler.
- Code-switching? Natural.
- ✍️ Generates culturally fluent, relatable Hinglish.
- Handles casual text, commentary, social chatter, and more.
- 🎯 Perfect for early-stage Indic bilingual applications and experimentation.
## Training Curve
She trained hard, and it shows...
## 🛠️ Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("HyperX-Sen/Zira-Z.1")
model = AutoModelForCausalLM.from_pretrained("HyperX-Sen/Zira-Z.1")

# Encode a code-mixed (Hinglish + English) prompt and generate a continuation
inputs = tokenizer("Tum kya soch rahe ho about AI?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
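If greedy decoding gives repetitive or incoherent output (see the note above), sampling parameters usually help. A hedged sketch continuing from the snippet above; the values are illustrative starting points, not recommended defaults:

```python
# Sampling can make base-model continuations less repetitive and more natural.
# These values are assumptions to experiment with, not tuned recommendations.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # lower = more focused, higher = more varied
    top_p=0.9,               # nucleus sampling: keep top 90% probability mass
    repetition_penalty=1.1,  # mildly discourage repeated phrases
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```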
## 🧬 License & Contribution
- License: open for research & commercial use (see LICENSE)
- 🤝 Contributions: welcomed with open arms (and open pull requests)
Made with ❤️, logic, and a lot of chai ☕