SmartPanel FunctionGemma 270M

A fine-tuned FunctionGemma 270M for on-device function calling inside Brinq's SmartPanel manufacturing-assistant demo. It shipped on the Synaptics Astra SL2619 SoC (2× Cortex-A55 @ 2 GHz, 1 TOPS Torq/Coral NPU, 2 GB DDR4) at Embedded World 2026.

What this model does

Given a user utterance and a list of tool declarations, the model emits one or more <start_function_call>call:NAME{...}<end_function_call> blocks or a plain natural-language reply. It was trained specifically to hit sub-500 ms decode latency on the SL2619 without giving up tool-selection accuracy on the SmartPanel domain.

Scope. The fine-tune is specific to the SmartPanel tool schema (maintenance procedures, alarm acknowledgement, photo capture, knowledge lookup). It's published here as prior art and a starting checkpoint for the related Coral Dev Board physical-AI demo at Google I/O 2026, not as a general-purpose function-calling model.

Files

File                        Format       Size    Recommended use
smartpanel-v15-q4_k_m.gguf  GGUF Q4_K_M  253 MB  Production. Runs via llama.cpp on 2 GB / 2-core ARM targets.
smartpanel-v15-f16.gguf     GGUF F16     543 MB  Canonical checkpoint for re-quantization or further fine-tuning.
smartpanel-v12-q4_k_m.gguf  GGUF Q4_K_M  253 MB  Mid-production milestone.
smartpanel-v8-q4_k_m.gguf   GGUF Q4_K_M  253 MB  Device deployment milestone (what our SL2619 test boards have shipped with since Feb).
smartpanel-v4-q4_k_m.gguf   GGUF Q4_K_M  253 MB  First version with correct call: output format. Benchmark reference.

Recommended starting point: smartpanel-v15-q4_k_m.gguf.
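
To fetch that file programmatically, the standard huggingface_hub download call works; a minimal sketch, with the repo id taken from the citation URL below:

from huggingface_hub import hf_hub_download

# Downloads the production Q4_K_M quant into the local HF cache and returns
# its path, which can be passed straight to llama.cpp / llama-cpp-python.
model_path = hf_hub_download(
    repo_id="BrinqAI/smartpanel-functiongemma-270m",
    filename="smartpanel-v15-q4_k_m.gguf",
)
print(model_path)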

Version lineage

Version       Date                       Format  Notes
v4            2026-01-18                 call:   First correct output format. 84.2% domain accuracy, 142 ms avg latency on local llama-cpp.
v8            2026-02-24                 call:   Deployed to Ollama on SL2619 test boards.
v8-moveworks  2026-02-26                 call:   Variant trained with additional Moveworks-flavored examples. Not included here.
v8-fixed      2026-02-27                 call:   Tokenizer hotfix.
v9–v13        2026-02-27 to 2026-03-01   call:   Data curation + prompt-template iterations.
v15           2026-03-03                 call:   Current production.

(v14 was trained but rolled forward into v15 before quantization — no separate artifact exists.)

Prompt format

The model uses FunctionGemma's native format. The tokenizer ships the <start_function_call>, <end_function_call>, <start_function_declaration>, <end_function_declaration>, <start_function_response>, <end_function_response>, and <start_of_turn> / <end_of_turn> special tokens.

<start_of_turn>user
You are a model that can do function calling with the following functions

<start_function_declaration>
declaration:set_led_color{description:<escape>Set RGB LED color<escape>,parameters:{...}}
<end_function_declaration>
<start_function_declaration>
declaration:play_buzzer{description:<escape>Sound the buzzer<escape>,parameters:{...}}
<end_function_declaration>

Turn the lights red and beep
<end_of_turn>
<start_of_turn>model
<start_function_call>call:set_led_color{color:<escape>red<escape>}<end_function_call><start_function_call>call:play_buzzer{pattern:<escape>beep<escape>}<end_function_call>
<end_of_turn>

Stop tokens: <end_of_turn>, <end_function_call>, <eos>. Recommended generation params: temperature=0.1, top_p=0.9, num_ctx=2048.
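
On the consuming side, the call blocks are easy to pull apart with a regex. This is a minimal sketch, not a utility from the shipped demo: the parse_calls helper, the flat-arguments assumption, and the <escape>-to-quotes rewrite are all illustrative.

import re

# Hypothetical helper: extract call:NAME{...} blocks from raw model output.
# Assumes flat argument objects (no nested braces), matching the
# SmartPanel-style calls shown above.
CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>",
    re.DOTALL,
)

def parse_calls(text: str) -> list[tuple[str, str]]:
    """Return (function_name, raw_args) pairs; raw_args keeps <escape> markers."""
    return CALL_RE.findall(text)

raw = ("<start_function_call>call:set_led_color{color:<escape>red<escape>}"
       "<end_function_call>")
for name, args in parse_calls(raw):
    # Illustrative normalization: treat <escape>-delimited spans as quoted strings.
    print(name, re.sub(r"<escape>(.*?)<escape>", r'"\1"', args))
# -> set_led_color color:"red"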

Usage

llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path="smartpanel-v15-q4_k_m.gguf",
    n_ctx=1024,    # enough for a single declaration; use 2048 when passing the full tool list
    n_threads=2,   # matches the SL2619's two Cortex-A55 cores
    verbose=False,
)

prompt = """<start_of_turn>user
You are a model that can do function calling with the following functions
<start_function_declaration>
declaration:acknowledge_alarm{description:<escape>Dismiss the current alarm<escape>,parameters:{properties:{},required:[],type:<escape>OBJECT<escape>}}
<end_function_declaration>

Ack the alarm
<end_of_turn>
<start_of_turn>model
"""

# Stop on <end_of_turn> only, so multi-call outputs aren't cut at the first <end_function_call>.
out = llm(prompt, max_tokens=128, temperature=0.1, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
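
To sanity-check the sub-500 ms decode-latency target, wall-clock timing around the call is enough. This continues the example above (llm and prompt are already defined); numbers on a host dev machine won't match the SL2619.

import time

t0 = time.perf_counter()
out = llm(prompt, max_tokens=128, temperature=0.1, stop=["<end_of_turn>"])
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"{elapsed_ms:.0f} ms -> {out['choices'][0]['text']!r}")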

Ollama

# Download the gguf, then:
cat > Modelfile <<'EOF'
FROM ./smartpanel-v15-q4_k_m.gguf
PARAMETER temperature 0.1
PARAMETER num_ctx 2048
PARAMETER stop "<end_of_turn>"
PARAMETER stop "<end_function_call>"
PARAMETER stop "<eos>"
EOF

ollama create smartpanel -f Modelfile
ollama run smartpanel "Ack the alarm"
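
Ollama also exposes a local HTTP API. A minimal sketch of calling it from Python, assuming the smartpanel model created above; raw mode bypasses Ollama's chat template since the prompt already carries the FunctionGemma turn structure:

import requests

# Same FunctionGemma-format prompt as in the llama-cpp-python example.
prompt = (
    "<start_of_turn>user\n"
    "You are a model that can do function calling with the following functions\n"
    "<start_function_declaration>\n"
    "declaration:acknowledge_alarm{description:<escape>Dismiss the current alarm<escape>,"
    "parameters:{properties:{},required:[],type:<escape>OBJECT<escape>}}\n"
    "<end_function_declaration>\n\n"
    "Ack the alarm\n"
    "<end_of_turn>\n"
    "<start_of_turn>model\n"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "smartpanel",
        "prompt": prompt,
        "raw": True,       # we supply the full turn format ourselves
        "stream": False,
        "options": {"temperature": 0.1, "num_ctx": 2048},
    },
)
print(resp.json()["response"])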

Benchmark (v3 / pre-v15, Jan 2026)

On the SmartPanel domain (llama-cpp-python, Q4_K_M, local dev machine):

Model                            Domain      Accuracy  Avg latency  Output format
Mobile Actions base              mobile      100 %     178 ms       call:
SmartPanel v1                    smartpanel  66.7 %    355 ms       declaration:
SmartPanel v2                    smartpanel  36.8 %    135 ms       ❌ partial output
SmartPanel v3 (precursor to v4)  smartpanel  84.2 %    142 ms       call:
Mobile Actions (cross-domain)    smartpanel  66.7 %    159 ms       call:

v15 numbers are forthcoming; benchmarks live in Brinq's internal repo.

Training

  • Base: unsloth/functiongemma-270m-it (BF16)
  • Method: LoRA fine-tune via Unsloth + TRL (SFTTrainer); an illustrative sketch follows this list
  • Hardware: A100 80GB (Docker, unsloth image)
  • Quantization: llama.cpp convert_hf_to_gguf.py --outtype f16 then llama-quantize ... 15 (Q4_K_M)
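
The real training script is internal, so this is only an illustrative sketch of the stack named above. The dataset filename and every hyperparameter (rank, learning rate, epochs, batch size) are placeholders, not Brinq's values; only the base model and the Unsloth + TRL SFTTrainer combination come from the notes above.

from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Base checkpoint from the notes above; everything below it is illustrative.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/functiongemma-270m-it",
    max_seq_length=2048,
    dtype=None,          # auto-selects BF16 on an A100
    load_in_4bit=False,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # placeholder LoRA rank
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset of pre-rendered FunctionGemma-format training texts.
dataset = load_dataset("json", data_files="smartpanel_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions name this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the LoRA into full weights so llama.cpp's convert_hf_to_gguf.py can consume them.
model.save_pretrained_merged("smartpanel-merged", tokenizer, save_method="merged_16bit")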

Training scripts, curated datasets, and eval harnesses live in Brinq's internal repo (not public). For the related Coral demo's dataset generators and fine-tuning recipe (which will be public), see BrinqAI/coral-functiongemma-demo (currently private, planned to go public around Google I/O 2026).

License

Gemma Terms of Use. By using this model you agree to the terms at https://ai.google.dev/gemma/terms.

Citation

@misc{brinqai_smartpanel_functiongemma_2026,
  author       = {Brinq AI},
  title        = {SmartPanel FunctionGemma 270M},
  year         = 2026,
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/BrinqAI/smartpanel-functiongemma-270m}},
}
