FinBERT: Financial Sentiment Analysis with Pre-trained Language Models
Paper: arXiv:1908.10063
ONNX INT8 quantized version of ProsusAI/finbert for efficient financial text embeddings.
| Property | Value |
|---|---|
| Base Model | ProsusAI/finbert |
| Format | ONNX |
| Quantization | INT8 (dynamic quantization) |
| Embedding Dimension | 768 |
| Quantized by | JustEmbed |
This is a quantized ONNX export of FinBERT, a BERT model further pre-trained on financial text by Prosus AI. The INT8 quantization reduces model size and improves inference speed while maintaining high accuracy for financial domain embeddings.
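To make the size/accuracy trade-off concrete, here is a minimal numpy sketch of one common per-tensor symmetric INT8 scheme: weights are stored as 8-bit integers plus a single float scale and dequantized on the fly. This is an illustration of the principle only; ONNX Runtime's quantization tooling handles the actual scheme internally and may use different details (e.g. asymmetric unsigned quantization).

```python
import numpy as np

# Illustrative weight matrix (not the actual FinBERT weights).
w = np.random.randn(768, 768).astype(np.float32)

# Per-tensor symmetric quantization: map the largest |weight| to 127.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize on the fly at inference time.
w_dequant = w_int8.astype(np.float32) * scale

# INT8 storage is 4x smaller than float32, and the round-trip
# error per weight is bounded by half the quantization step.
max_err = np.abs(w - w_dequant).max()
print(w_int8.nbytes, w.nbytes, max_err <= scale / 2 + 1e-6)
```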
Files:

- `model_quantized.onnx` – INT8 quantized ONNX model
- `tokenizer.json` – Fast tokenizer
- `vocab.txt` – Vocabulary file
- `config.json` – Model configuration

Usage with JustEmbed:

```python
from justembed import Embedder

embedder = Embedder("finbert-int8")
vectors = embedder.embed(["quarterly earnings exceeded expectations"])
```
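A common downstream step is comparing two embeddings by cosine similarity. The sketch below assumes `embed` returns one 768-dimensional vector per input text (an assumption about the JustEmbed API, not verified here); random vectors stand in for real outputs so the example is self-contained.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for two 768-dim embeddings returned by embed().
v1 = np.random.rand(768)
v2 = np.random.rand(768)

score = cosine_similarity(v1, v2)
```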
Usage with ONNX Runtime directly:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Load the tokenizer and quantized model from the current directory.
tokenizer = AutoTokenizer.from_pretrained(".")
session = ort.InferenceSession("model_quantized.onnx")

inputs = tokenizer("quarterly earnings exceeded expectations", return_tensors="np")
outputs = session.run(None, dict(inputs))
```
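The first output is the last hidden state with shape `(batch, seq_len, 768)`. A sentence embedding is commonly obtained by attention-masked mean pooling over the token dimension; this pooling choice is an assumption, as the card does not prescribe one. The sketch below uses a synthetic hidden state so it runs standalone.

```python
import numpy as np

# Hypothetical last hidden state for one 4-token input; in practice this
# would be outputs[0] from the ONNX Runtime session above.
last_hidden = np.random.rand(1, 4, 768).astype(np.float32)
attention_mask = np.array([[1, 1, 1, 0]])  # final position is padding

# Masked mean pooling: average only over real (non-padding) tokens.
mask = attention_mask[..., None].astype(np.float32)  # (1, 4, 1)
summed = (last_hidden * mask).sum(axis=1)            # (1, 768)
counts = mask.sum(axis=1)                            # (1, 1)
embedding = summed / counts

print(embedding.shape)  # (1, 768)
```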
This model is a derivative work of ProsusAI/finbert. The original model is licensed under the Apache License 2.0, and this quantized version is distributed under the same license; see the LICENSE file for the full text.
```bibtex
@article{araci2019finbert,
  title={FinBERT: Financial Sentiment Analysis with Pre-Trained Language Models},
  author={Araci, Dogu},
  journal={arXiv preprint arXiv:1908.10063},
  year={2019}
}
```