
🧠 SentimentClassifier-BERT-RegulatoryCompliance

A BERT-based sentiment analysis model fine-tuned on regulatory feedback and compliance-related text. This model classifies input text into Positive, Neutral, or Negative, making it well-suited for analyzing complaints, formal feedback, and regulatory communication.


✨ Model Highlights

  • 📌 Based on bert-base-uncased
  • 🔍 Fine-tuned on a custom dataset of labeled regulatory feedback
  • ⚡ Supports prediction of 3 classes: Positive, Neutral, Negative
  • 🧠 Built using Hugging Face Transformers and PyTorch

🧠 Intended Uses

  • ✅ Regulatory and compliance feedback classification
  • ✅ Complaint monitoring and triaging
  • ✅ Customer sentiment analysis for compliance departments

🚫 Limitations

  • ❌ Not optimized for multi-language input (English only)
  • 📏 Input longer than 128 tokens will be truncated (a chunking workaround is sketched after this list)
  • 🤔 Model may misinterpret informal or slang language
  • ⚠️ Not intended to replace expert human judgment in legal matters
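
Inputs above the 128-token limit can still be scored by chunking. The sketch below is a workaround, not part of the released model: it uses the fast tokenizer's `return_overflowing_tokens` option to window over the text (the 32-token stride is an illustrative choice) and averages the per-chunk class probabilities; `tokenizer` and `model` are assumed to be loaded as in the Usage section.

```python
import torch
import torch.nn.functional as F

def predict_long(text, tokenizer, model, max_length=128, stride=32):
    # Split the text into overlapping max_length-token chunks.
    enc = tokenizer(text, return_tensors="pt", padding=True,
                    truncation=True, max_length=max_length, stride=stride,
                    return_overflowing_tokens=True)
    enc.pop("overflow_to_sample_mapping", None)  # bookkeeping field the model does not accept
    with torch.no_grad():
        logits = model(**enc).logits
    probs = F.softmax(logits, dim=-1).mean(dim=0)  # average class probabilities over chunks
    label_map = {0: "Negative", 1: "Neutral", 2: "Positive"}
    pred = int(torch.argmax(probs))
    return label_map[pred], float(probs[pred])
```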

πŸ‹οΈβ€β™‚οΈ Training Details

| Attribute        | Value                                              |
|------------------|----------------------------------------------------|
| Base Model       | bert-base-uncased                                  |
| Dataset          | Custom .txt file of labeled feedback (feedbacks.txt) |
| Labels           | Negative (0), Neutral (1), Positive (2)            |
| Max Token Length | 128                                                |
| Epochs           | 3                                                  |
| Batch Size       | 16                                                 |
| Optimizer        | AdamW                                              |
| Loss Function    | CrossEntropyLoss                                   |
| Framework        | PyTorch + Transformers                             |
| Hardware         | CUDA-enabled GPU                                   |
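
For reference, below is a minimal sketch of a comparable fine-tune using the hyperparameters above. It is not the released training_script.py: the learning rate and the inline `train_pairs` stand-in for the parsed feedbacks.txt data are assumptions.

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Hypothetical stand-in for feedbacks.txt parsed into (text, label_id) pairs.
train_pairs = [
    ("The audit response was thorough and timely.", 2),
    ("The filing instructions were adequate.", 1),
    ("Repeated compliance breaches were ignored.", 0),
]

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(train_pairs, batch_size=16, shuffle=True, collate_fn=collate)
optimizer = AdamW(model.parameters(), lr=2e-5)  # learning rate is an assumption

model.train()
for epoch in range(3):  # 3 epochs, per the table above
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss  # CrossEntropyLoss, computed internally when labels are passed
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```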

📊 Evaluation Metrics

| Metric    | Score |
|-----------|-------|
| Accuracy  | 0.84  |
| Precision | 0.85  |
| Recall    | 0.84  |
| F1 Score  | 0.85  |
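
The averaging scheme behind the precision/recall/F1 figures is not stated. The sketch below shows how such scores are typically computed with scikit-learn, assuming weighted averaging and hypothetical `y_true`/`y_pred` label-id lists from a held-out split.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold labels and model predictions for a held-out split.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"  # averaging choice is an assumption
)
print(f"Accuracy: {accuracy:.2f}  Precision: {precision:.2f}  "
      f"Recall: {recall:.2f}  F1: {f1:.2f}")
```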

🔎 Label Mapping

| Label ID | Sentiment |
|----------|-----------|
| 0        | Negative  |
| 1        | Neutral   |
| 2        | Positive  |
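
If this mapping is stored in the model config (a sketch under the assumption that it is not already present), downstream tools such as transformers pipelines report readable label names instead of LABEL_0/LABEL_1/LABEL_2:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "your-username/sentiment-bert-regulatory-compliance"
)
model.config.id2label = {0: "Negative", 1: "Neutral", 2: "Positive"}
model.config.label2id = {"Negative": 0, "Neutral": 1, "Positive": 2}
model.save_pretrained("sentiment-bert-with-labels")  # hypothetical local path
```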

🚀 Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import torch.nn.functional as F

model_name = "your-username/sentiment-bert-regulatory-compliance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for deterministic inference

def predict(text):
    # Tokenize with the same 128-token limit used during training.
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
        probs = F.softmax(outputs.logits, dim=1)
        pred = torch.argmax(probs, dim=1).item()
        label_map = {0: "Negative", 1: "Neutral", 2: "Positive"}
        return f"Sentiment: {label_map[pred]} (Confidence: {probs[0][pred]:.2f})"

# Example
print(predict("The issue was resolved promptly and professionally."))
```
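
If the id2label mapping from the previous section is present in the uploaded config, the same prediction can be made with the higher-level pipeline API (a sketch, reusing the placeholder model name):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="your-username/sentiment-bert-regulatory-compliance")
print(classifier("The response from the regulator was unclear and delayed."))
# e.g. [{'label': 'Negative', 'score': ...}]
```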

πŸ“ Repository Structure

```
.
├── model/               # Fine-tuned model files (pytorch_model.bin, config.json)
├── tokenizer/           # Tokenizer config and vocab
├── training_script.py   # Training code
├── feedbacks.txt        # Source dataset
└── README.md            # Model card
```

🤝 Contributing

Contributions are welcome! Feel free to open an issue or pull request to improve the model or its documentation.
