qwen25-logql-finetune-v1.2.0
# Fine-tuned Gemma 3 12B for LogQL Query Generation [MLC]
This model is fine-tuned from Google's Gemma 3 12B to generate LogQL queries from natural language descriptions.
## Model Information
- Base Model: gemma-3-12b
- Quantization: q4f16_1
- Format: MLC (Machine Learning Compilation)
- Use Case: LogQL query generation
## Training Details
- Base model: gemma/gemma-3-12b-it
- Fine-tuned for LogQL query generation
- Trained on a custom, generated dataset (15,000 samples, 5 epochs)
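The dataset itself is not published here; as a loose illustration of the task the model is trained for, a sample might map a natural-language request to a LogQL query. The field names and content below are hypothetical assumptions, not the actual dataset schema:

```python
# Hypothetical sample shape; the real dataset schema is not published.
sample = {
    "instruction": "Count error-level log lines per pod over 5 minutes",
    "output": 'sum by (pod) (count_over_time({level="error"}[5m]))',
}
print(sample["output"])
```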
## Usage

### With MLC-LLM Python API
```python
from mlc_llm import MLCEngine

# Create the engine (downloads the model on first use)
engine = MLCEngine(
    model="NiekWork/qwen25-logql-finetune-v1.2.0",
    mode="local",
)

# Generate a LogQL query from a natural-language description (example prompt)
response = engine.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Show all error-level logs from the nginx job over the last hour",
        }
    ]
)
print(response.choices[0].message.content)

# Release engine resources when done
engine.terminate()
```
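Because the model returns free-form text, downstream code may want a quick shape check before forwarding a query to Loki. The sketch below is one minimal way to do that (plain Python, not part of MLC-LLM; the regex only validates the leading stream selector, not full LogQL syntax):

```python
import re

# Matches a LogQL stream selector prefix such as {job="nginx"} or
# {app="api", env=~"prod.*"}; everything after the selector is not checked.
SELECTOR_RE = re.compile(
    r'^\{\s*\w+\s*(=|!=|=~|!~)\s*"[^"]*"\s*'
    r'(,\s*\w+\s*(=|!=|=~|!~)\s*"[^"]*"\s*)*\}'
)

def looks_like_logql(query: str) -> bool:
    """Return True if the string begins with a valid-looking stream selector."""
    return bool(SELECTOR_RE.match(query.strip()))

print(looks_like_logql('{job="nginx"} |= "error"'))  # True
print(looks_like_logql('SELECT * FROM logs'))        # False
```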
### With MLC-LLM CLI

```shell
mlc_llm chat NiekWork/qwen25-logql-finetune-v1.2.0
```
## Files
This repository contains:
- `mlc-chat-config.json` - MLC configuration
- `ndarray-cache.json` - Weight metadata
- `params_shard_*.bin` - Quantized model weights
- `tokenizer.json` - Tokenizer
- `tokenizer_config.json` - Tokenizer configuration
## About MLC-LLM
MLC LLM is a universal deployment solution that allows language models to run natively on diverse hardware backends and in native applications. More info at: https://llm.mlc.ai/
## License
This model is released under the Apache 2.0 license.