Abstract
Bielik 11B v2, an 11B-parameter language model optimized for Polish, excels on Polish benchmarks through two training innovations, Weighted Instruction Cross-Entropy Loss and Adaptive Learning Rate, outperforming many larger models while remaining efficient to deploy.
We present Bielik 11B v2, a state-of-the-art language model optimized for Polish text processing. Built on the Mistral 7B v0.2 architecture and scaled to 11B parameters using depth up-scaling, this model demonstrates exceptional performance across Polish language benchmarks while maintaining strong cross-lingual capabilities. We introduce two key technical innovations: Weighted Instruction Cross-Entropy Loss, which optimizes learning across diverse instruction types by assigning quality-based weights to training examples, and Adaptive Learning Rate, which dynamically adjusts based on context length. Comprehensive evaluation across multiple benchmarks demonstrates that Bielik 11B v2 outperforms many larger models, including those with 2-6 times more parameters, and significantly surpasses other specialized Polish language models on tasks ranging from linguistic understanding to complex reasoning. The model's parameter efficiency and extensive quantization options enable deployment across various hardware configurations, advancing Polish language AI capabilities and establishing new benchmarks for resource-efficient language modeling in less-represented languages.
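The two training innovations can be made concrete with short sketches. Below is a minimal, hedged illustration of Weighted Instruction Cross-Entropy Loss as described above: a standard token-level cross-entropy is averaged per example and then scaled by a quality-based weight. The function name, weight normalization, and masking convention are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of Weighted Instruction Cross-Entropy Loss (illustrative; the exact
# weighting scheme used for Bielik 11B v2 is defined in the paper).
import torch
import torch.nn.functional as F

def weighted_instruction_ce(logits, targets, example_weights, ignore_index=-100):
    """Per-example cross-entropy scaled by a quality-based weight.

    logits:          (batch, seq_len, vocab) model outputs
    targets:         (batch, seq_len) token ids, ignore_index on masked positions
    example_weights: (batch,) quality weight assigned to each training example
    """
    batch, seq_len, vocab = logits.shape
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab),
        targets.reshape(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).reshape(batch, seq_len)
    valid = (targets != ignore_index).float()
    per_example = (token_loss * valid).sum(dim=1) / valid.sum(dim=1).clamp(min=1.0)
    # Weighted mean: higher-quality examples contribute more to the gradient.
    return (example_weights * per_example).sum() / example_weights.sum().clamp(min=1e-8)
```

For Adaptive Learning Rate, the abstract states only that the rate adjusts dynamically with context length; the square-root scaling and the 4096-token reference length in the sketch below are assumptions chosen for illustration.

```python
def adaptive_lr(base_lr, context_length, base_context_length=4096):
    """Rescale the base learning rate by the batch's context length.

    The sqrt scaling is an assumed form, not necessarily the paper's rule.
    """
    return base_lr * (context_length / base_context_length) ** 0.5

# Hypothetical usage inside a training loop:
# for group in optimizer.param_groups:
#     group["lr"] = adaptive_lr(2e-5, batch_seq_len)
```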
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Bielik v3 Small: Technical Report (2025)
- Trillion 7B Technical Report (2025)
- Compass-V2 Technical Report (2025)
- BitNet b1.58 2B4T Technical Report (2025)
- Llama-3-Nanda-10B-Chat: An Open Generative Large Language Model for Hindi (2025)
- Pangu Ultra: Pushing the Limits of Dense Large Language Models on Ascend NPUs (2025)
- DeepDistill: Enhancing LLM Reasoning Capabilities via Large-Scale Difficulty-Graded Data Training (2025)
Models citing this paper: 4
Datasets citing this paper: 0