
LFM2-350M
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
We're releasing the weights of three post-trained checkpoints with 350M, 700M, and 1.2B parameters. They provide the following key features to create AI-powered edge applications:
- Fast training & inference: LFM2 achieves 3x faster training compared to its previous generation, along with 2x faster decode and prefill speed on CPU compared to Qwen3.
- Best performance: LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
- New architecture: LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
- Flexible deployment: LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our blog post.
Model details
Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance. They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations. However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
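For illustration, here is a minimal supervised fine-tuning sketch using TRL's SFTTrainer with the PyTorch base checkpoint LiquidAI/LFM2-350M (an assumption; this repo itself ships the ONNX export). The dataset below is a placeholder to show the shape of the workflow; substitute your own narrow-domain data.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat-style dataset; replace with your own task data
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2-350M",  # PyTorch base checkpoint (not the ONNX export)
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="lfm2-350m-sft",
        per_device_train_batch_size=4,
        learning_rate=5e-5,
        num_train_epochs=1,
    ),
)
trainer.train()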
| Property | Value |
|---|---|
| Parameters | 354,483,968 |
| Layers | 16 (10 conv + 6 attn) |
| Context length | 32,768 tokens |
| Vocabulary size | 65,536 |
| Precision | bfloat16 |
| Training budget | 10 trillion tokens |
| License | LFM Open License v1.0 |
Supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
Generation parameters: We recommend the following settings:
- temperature=0.3
- min_p=0.15
- repetition_penalty=1.05
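For reference, these settings map onto standard Hugging Face generation arguments. The following is a minimal sketch that assumes the PyTorch base checkpoint LiquidAI/LFM2-350M; the ONNX examples later on this page use greedy decoding instead.

from transformers import pipeline

# Sketch only: recommended sampling settings applied via the transformers pipeline
generator = pipeline("text-generation", model="LiquidAI/LFM2-350M")
messages = [{"role": "user", "content": "Summarize what LFM2 is in one sentence."}]
output = generator(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
)
print(output[0]["generated_text"][-1]["content"])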
Architecture: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
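The layer ordering can be read directly from the config shipped with this repo. A small sketch, assuming the same layer_types field that the ONNX Runtime example further down relies on:

from transformers import AutoConfig

# Inspect the hybrid layer layout (per-layer entries are 'conv' or 'full_attention')
config = AutoConfig.from_pretrained("onnx-community/LFM2-350M-ONNX")
print(config.layer_types)
print(config.layer_types.count("conv"), "conv layers,",
      config.layer_types.count("full_attention"), "attention layers")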
Pre-training mixture: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
Training approach:
- Knowledge distillation using LFM1-7B as the teacher model
- Very large-scale SFT on a mix of 50% downstream tasks and 50% general domains
- Custom DPO with length normalization and semi-online datasets
- Iterative model merging
How to run LFM2
Transformers.js
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:
npm i @huggingface/transformers
Example: Basic text generation
import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/LFM2-350M-ONNX",
  { dtype: "q4" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the capital of France?" },
];

// Generate a response
const output = await generator(messages, {
  max_new_tokens: 512,
  do_sample: false,
  streamer: new TextStreamer(generator.tokenizer, { skip_prompt: true, skip_special_tokens: true }),
});
console.log(output[0].generated_text.at(-1).content);
// The capital of France is Paris.
Example: Tool calling
import { AutoModelForCausalLM, AutoTokenizer, TextStreamer } from "@huggingface/transformers";

// Load tokenizer and model
const model_id = "onnx-community/LFM2-350M-ONNX";
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const model = await AutoModelForCausalLM.from_pretrained(
  model_id, { dtype: "q4", device: "webgpu" },
);

// Define tools and messages
const tools = [
  {
    name: "get_weather",
    description: "Get current weather information for a location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA",
        },
        unit: {
          type: "string",
          enum: ["celsius", "fahrenheit"],
          description: "The unit of temperature to use",
        },
      },
      required: ["location"],
    },
  },
];
const messages = [
  {
    role: "user",
    content: "What's the weather like in New York?",
  },
];

// Prepare inputs
const input = tokenizer.apply_chat_template(messages, {
  tools,
  add_generation_prompt: true,
  return_dict: true,
});

// Generate output
const sequences = await model.generate({
  ...input,
  max_new_tokens: 512,
  do_sample: false,
  streamer: new TextStreamer(tokenizer, { skip_prompt: true, skip_special_tokens: false }),
});

// Decode and print the generated text
const response = tokenizer.batch_decode(
  sequences.slice(null, [input.input_ids.dims[1], null]),
  { skip_special_tokens: true },
);
console.log(response[0]); // [get_weather(location="New York", unit="fahrenheit")]
ONNXRuntime
from transformers import AutoConfig, AutoTokenizer
import onnxruntime
import numpy as np
from huggingface_hub import hf_hub_download
# 1. Load config, tokenizer, and model
model_id = "onnx-community/LFM2-350M-ONNX"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
filename = "model.onnx" # Options: "model.onnx", "model_fp16.onnx", "model_q4.onnx", "model_q4f16.onnx"
model_path = hf_hub_download(repo_id=model_id, filename=f"onnx/{filename}") # Download the graph
hf_hub_download(repo_id=model_id, filename=f"onnx/{filename}_data") # Download the weights
session = onnxruntime.InferenceSession(model_path)
## Set config values
num_key_value_heads = config.num_key_value_heads
head_dim = config.hidden_size // config.num_attention_heads
num_hidden_layers = config.num_hidden_layers
eos_token_id = config.eos_token_id
hidden_size = config.hidden_size
conv_L_cache = config.conv_L_cache
layer_types = config.layer_types
# 2. Prepare inputs
prompt = "What is C. elegans?"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="np")
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
batch_size = input_ids.shape[0]
position_ids = np.tile(np.arange(0, input_ids.shape[-1]), (batch_size, 1))
past_cache_values = {}
for i in range(num_hidden_layers):
    if layer_types[i] == 'full_attention':
        for kv in ('key', 'value'):
            past_cache_values[f'past_key_values.{i}.{kv}'] = np.zeros([batch_size, num_key_value_heads, 0, head_dim], dtype=np.float32)
    elif layer_types[i] == 'conv':
        past_cache_values[f'past_conv.{i}'] = np.zeros([batch_size, hidden_size, conv_L_cache], dtype=np.float32)
    else:
        raise ValueError(f"Unsupported layer type: {layer_types[i]}")
# 3. Generation loop
max_new_tokens = 1024
generated_tokens = np.array([[]], dtype=np.int64)
for i in range(max_new_tokens):
    logits, *present_cache_values = session.run(None, dict(
        input_ids=input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
        **past_cache_values,
    ))

    ## Update values for next generation loop
    input_ids = logits[:, -1].argmax(-1, keepdims=True)
    attention_mask = np.concatenate([attention_mask, np.ones_like(input_ids, dtype=np.int64)], axis=-1)
    position_ids = position_ids[:, -1:] + 1
    for j, key in enumerate(past_cache_values):
        past_cache_values[key] = present_cache_values[j]
    generated_tokens = np.concatenate([generated_tokens, input_ids], axis=-1)
    if (input_ids == eos_token_id).all():
        break

    ## (Optional) Streaming
    print(tokenizer.decode(input_ids[0]), end='', flush=True)
print()
# 4. Output result
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0])
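The loop above decodes greedily with argmax. To apply the recommended sampling settings with ONNX Runtime, the argmax step can be swapped for a small NumPy sampler along these lines (a sketch, not part of the official example):

def sample_next_token(last_logits, generated, temperature=0.3, min_p=0.15, repetition_penalty=1.05):
    # last_logits: (1, vocab_size) logits for the final position; generated: tokens produced so far
    logits = last_logits.astype(np.float64).copy()
    for tok in np.unique(generated).astype(np.int64):  # repetition penalty on already-generated tokens
        logits[0, tok] = logits[0, tok] / repetition_penalty if logits[0, tok] > 0 else logits[0, tok] * repetition_penalty
    probs = np.exp((logits - logits.max(axis=-1, keepdims=True)) / temperature)  # temperature softmax
    probs /= probs.sum(axis=-1, keepdims=True)
    probs[probs < min_p * probs.max(axis=-1, keepdims=True)] = 0.0  # min_p filtering
    probs /= probs.sum(axis=-1, keepdims=True)
    next_token = np.random.choice(probs.shape[-1], p=probs[0])
    return np.array([[next_token]], dtype=np.int64)

Inside the generation loop, replace the argmax line with: input_ids = sample_next_token(logits[:, -1], generated_tokens)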