State-of-the-Art Multilingual Sentiment Analysis

Multilingual -> English, Chinese, Arabic, Dutch, French, Russian, Spanish, Turkish, etc.

Tired of the high costs, slow latency, and massive computational footprint of Large Language Models? This is the sentiment analysis model you've been waiting for.

deberta-v3-base-absa-v1.1 delivers state-of-the-art accuracy for fine-grained sentiment analysis with the speed, efficiency, and simplicity of a classic encoder model. It represents a paradigm shift in production-ready AI: maximum performance with minimum operational burden.

Why This Model?

  • 🎯 Wide Usage: This model has passed one million downloads, making it (possibly) the most downloaded open-source ABSA model ever.
  • 🏆 SOTA Performance: Built on the powerful DeBERTa-v3 architecture and fine-tuned with advanced, context-aware methods from PyABSA, this model achieves top-tier accuracy on complex sentiment tasks.
  • ⚡ LLM-Free Efficiency: No need for A100s or massive GPU clusters. This model runs inference at a fraction of the computational cost, enabling real-time performance on standard CPUs or modest GPUs.
  • 💰 Lower Costs: Slash your hosting and API call expenses. The small footprint and high efficiency translate directly to significant savings, whether you're a startup or an enterprise.
  • 🚀 Production-Ready: Lightweight, fast, and reliable. This model is built to be deployed at scale for applications that demand immediate and accurate sentiment feedback.
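The efficiency claim is easy to check on your own hardware. Below is a minimal, hypothetical timing harness; `measure_latency_ms` and `stub_classifier` are illustrative names not found in the model or in transformers, and the stub stands in for the real pipeline so the sketch runs anywhere:

```python
import time

def measure_latency_ms(classify, inputs, warmup=2):
    """Rough average per-call latency in milliseconds for any
    classifier callable with the (text, text_pair=...) signature."""
    for text, aspect in inputs[:warmup]:  # warm-up calls, excluded from timing
        classify(text, text_pair=aspect)
    start = time.perf_counter()
    for text, aspect in inputs:
        classify(text, text_pair=aspect)
    return (time.perf_counter() - start) * 1000 / len(inputs)

# Stub classifier so the sketch runs without downloading the model; swap in
# pipeline("text-classification", model="yangheng/deberta-v3-base-absa-v1.1").
def stub_classifier(text, text_pair=None):
    return [{"label": "Positive", "score": 1.0}]

pairs = [("The food was great.", "food")] * 10
print(f"{measure_latency_ms(stub_classifier, pairs):.3f} ms/call")
```

Timing the real pipeline this way on a laptop CPU is the quickest way to validate the "no A100s needed" point for your own workload.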

Ideal Use Cases

This model excels where speed, cost, and precision are critical:

  • Real-time Social Media Monitoring: Analyze brand sentiment towards specific product features as it happens.
  • Intelligent Customer Support: Automatically route tickets based on the sentiment towards different aspects of a complaint.
  • Product Review Analysis: Aggregate fine-grained feedback on thousands of reviews to identify precise strengths and weaknesses.
  • Market Intelligence: Understand nuanced public opinion on key industry topics.
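As an illustration of the customer-support use case, here is a minimal routing sketch. `route_ticket` and `fake_classifier` are hypothetical names; the `[{'label': ..., 'score': ...}]` output shape matches the Hugging Face text-classification pipeline, which can be passed in place of the stub:

```python
def route_ticket(classify, complaint, aspects, threshold=0.7):
    """Collect aspects with confidently negative sentiment; fall back
    to a general queue when none are found."""
    flagged = []
    for aspect in aspects:
        pred = classify(complaint, text_pair=aspect)[0]
        if pred["label"] == "Negative" and pred["score"] >= threshold:
            flagged.append(aspect)
    return flagged or ["general"]

# Stub standing in for the real pipeline so the sketch runs offline:
def fake_classifier(text, text_pair=None):
    label = "Negative" if text_pair == "billing" else "Positive"
    return [{"label": label, "score": 0.95}]

print(route_ticket(fake_classifier, "I was charged twice this month!",
                   ["billing", "app", "support staff"]))
# ['billing']
```

The same pattern (one classifier call per aspect, then a thresholded decision) applies to the monitoring and market-intelligence cases as well.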

How to Use

Getting started is incredibly simple. You can use the Hugging Face pipeline for a zero-effort implementation.

from transformers import pipeline

# Load the classifier pipeline - it's that easy.
classifier = pipeline("text-classification", model="yangheng/deberta-v3-base-absa-v1.1")
sentence = "The food was exceptional, although the service was a bit slow."

# Analyze sentiment for the 'food' aspect
result_food = classifier(sentence, text_pair="food")
# Illustrative output (exact scores vary):
# [{'label': 'Positive', 'score': 0.989}]

# Analyze sentiment for the 'service' aspect from the same sentence
result_service = classifier(sentence, text_pair="service")

# The model is multilingual, so other languages work the same way:
result_zh_phone = classifier("这部手机的性能差劲", text_pair="性能")  # "This phone's performance is poor" / "performance"
result_zh_car = classifier("这台汽车的引擎推力强劲", text_pair="引擎")  # "This car's engine thrust is strong" / "engine"
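For review analysis at scale, per-aspect predictions can be tallied across many reviews. The sketch below uses a hypothetical helper (`aggregate_aspect_sentiment`) and illustrative scores; each prediction has the `[{'label': ..., 'score': ...}]` shape the pipeline returns:

```python
from collections import Counter

def aggregate_aspect_sentiment(predictions):
    """Tally sentiment labels per aspect from (aspect, output) pairs,
    where each output has the pipeline's [{'label', 'score'}] shape."""
    counts = {}
    for aspect, output in predictions:
        counts.setdefault(aspect, Counter())[output[0]["label"]] += 1
    return counts

# Illustrative predictions, as the classifier would return them:
predictions = [
    ("battery", [{"label": "Positive", "score": 0.97}]),
    ("battery", [{"label": "Negative", "score": 0.91}]),
    ("screen",  [{"label": "Positive", "score": 0.88}]),
]
summary = aggregate_aspect_sentiment(predictions)
print(summary["battery"])  # Counter({'Positive': 1, 'Negative': 1})
```

Feeding thousands of (review, aspect) pairs through the classifier and into a tally like this yields the fine-grained strengths-and-weaknesses view described above.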

The Technology Behind the Performance

Base Model

It starts with microsoft/deberta-v3-base, a highly optimized encoder known for its disentangled attention mechanism, which improves efficiency and performance over the original BERT and RoBERTa models.

Fine-Tuning Architecture

It employs the FAST-LCF-BERT architecture, trained with the PyABSA framework. This introduces a Local Context Focus (LCF) layer that dynamically guides the model to concentrate on the words and phrases most relevant to the given aspect, dramatically improving contextual understanding and accuracy.
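As a rough intuition for what the LCF layer does (a toy illustration only, not the actual PyABSA implementation): tokens within a small window around the aspect keep full weight, while distant tokens are down-weighted or masked out. The function name `lcf_weights` and the exact weighting scheme are assumptions for this sketch.

```python
def lcf_weights(tokens, aspect_index, srd=3, decay=True):
    """Toy local-context mask: tokens within the semantic-relative
    distance (srd) of the aspect keep weight 1.0; farther tokens are
    linearly down-weighted (decay=True) or zeroed out (decay=False)."""
    weights = []
    for i in range(len(tokens)):
        distance = abs(i - aspect_index)
        if distance <= srd:
            weights.append(1.0)
        elif decay:
            weights.append(max(0.0, 1.0 - (distance - srd) / len(tokens)))
        else:
            weights.append(0.0)
    return weights

tokens = "the food was exceptional although the service was slow".split()
print([round(w, 2) for w in lcf_weights(tokens, aspect_index=1, srd=2)])
# [1.0, 1.0, 1.0, 1.0, 0.89, 0.78, 0.67, 0.56, 0.44]
```

With "food" as the aspect, nearby words like "exceptional" keep full weight while the clause about the service fades, which is the behavior that lets one sentence yield different sentiments for different aspects.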

Training Data

This model was trained on a robust, aggregated corpus of over 30,000 unique samples (augmented to ~180,000 examples) from canonical ABSA datasets, including SemEval-2014, SemEval-2016, MAMS, and more. The standard test sets were excluded to ensure fair and reliable benchmarking.

Citation

If you use this model in your research or application, please cite the foundational work on the PyABSA framework.

BibTeX Citation

@inproceedings{DBLP:conf/cikm/0008ZL23,
  author       = {Heng Yang and Chen Zhang and Ke Li},
  title        = {PyABSA: {A} Modularized Framework for Reproducible Aspect-based Sentiment Analysis},
  booktitle    = {Proceedings of the 32nd {ACM} International Conference on Information and Knowledge Management, {CIKM} 2023},
  pages        = {5117--5122},
  publisher    = {{ACM}},
  year         = {2023},
  doi          = {10.1145/3583780.3614752}
}

@article{YangZMT21,
  author       = {Heng Yang and Biqing Zeng and Mayi Xu and Tianxing Wang},
  title        = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning},
  journal      = {CoRR},
  volume       = {abs/2110.08604},
  year         = {2021},
  url          = {https://arxiv.org/abs/2110.08604},
}
Downloads last month: 88,062
Model size: 184M params (Safetensors; tensor types: I64, F32)