---
language:
- ar
license: apache-2.0
task_categories:
- multiple-choice
- text-classification
pretty_name: Sentiment Analysis MCQ Evaluation Dataset
tags:
- sentiment-analysis
- mcq
- financial
- arabic
- evaluation
- benchmark
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: original_split
    dtype: string
  - name: choices
    sequence: string
  - name: text
    dtype: string
  - name: answer
    dtype: string
  - name: original_sentiment
    dtype: string
  - name: query
    dtype: string
  - name: id
    dtype: string
  - name: gold
    dtype: int64
  - name: category
    dtype: string
  splits:
  - name: test
    num_bytes: 495875
    num_examples: 80
  download_size: 218370
  dataset_size: 495875
---

# Sentiment Analysis MCQ Evaluation Dataset

Validation and test splits for Arabic financial sentiment analysis, formatted as multiple-choice questions (MCQ).

## Dataset Structure

- **Format**: Multiple-choice questions
- **Language**: Arabic
- **Domain**: Financial reports
- **Task**: Sentiment classification
- **Validation**: 20 examples
- **Test**: 20 examples

Both original splits are served under a single `test` split; the split each example came from is recorded in the `original_split` field, so the original subsets can be recovered by filtering (see the snippet below).
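
A minimal sketch of that filtering step. The values `"validation"` and `"test"` for `original_split` are assumptions and should be checked against the data:

```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/Sentiment_Analysis_MCQ_eval")
test_data = dataset["test"]

# "validation" and "test" are assumed labels of `original_split`;
# inspect set(test_data["original_split"]) to confirm the actual values.
val_subset = test_data.filter(lambda ex: ex["original_split"] == "validation")
test_subset = test_data.filter(lambda ex: ex["original_split"] == "test")
print(len(val_subset), len(test_subset))
```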

## Fields

- `id`: Unique identifier
- `query`: Full MCQ prompt
- `answer`: Correct answer letter
- `text`: Question text
- `choices`: Answer options [a, b, c]
- `gold`: Correct answer index
- `category`: Report category
- `original_sentiment`: Ground-truth sentiment label
- `original_split`: Split the example originally belonged to
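
To double-check that the loaded columns match this list, a quick inspection using the standard `datasets` API:

```python
from datasets import load_dataset

test_data = load_dataset("SahmBenchmark/Sentiment_Analysis_MCQ_eval", split="test")

# Column names and dtypes as declared in the metadata above.
print(test_data.features)

# One full record, i.e. a plain dict with the fields listed above.
print(test_data[0])
```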

## Answer Mapping

- a) positive (gold: 0)
- b) negative (gold: 1)
- c) neutral (gold: 2)
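
When parsing a model's letter response, a small helper that mirrors this mapping can be handy. This is only a sketch and assumes answers are single letters, possibly with stray whitespace or a closing parenthesis:

```python
# Letter-to-index mapping, matching the list above.
LETTER_TO_GOLD = {"a": 0, "b": 1, "c": 2}
GOLD_TO_LETTER = {v: k for k, v in LETTER_TO_GOLD.items()}

def letter_to_index(letter: str) -> int:
    """Normalise a predicted letter such as 'A)' or ' b' to its gold index."""
    return LETTER_TO_GOLD[letter.strip().lower().rstrip(")")]

assert letter_to_index("A)") == 0
assert GOLD_TO_LETTER[2] == "c"
```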

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/Sentiment_Analysis_MCQ_eval")
test_data = dataset['test']

for example in test_data:
    print(f"Question: {example['text']}")
    print(f"Choices: {example['choices']}")
    print(f"Correct: {example['answer']} (index: {example['gold']})")
```
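
Building on the snippet above, a rough sketch of scoring with simple accuracy. `predict` is a hypothetical placeholder for your own model call and is assumed to return one of the answer letters:

```python
# Continues from the snippet above (reuses `test_data`).
letter_to_gold = {"a": 0, "b": 1, "c": 2}

def predict(query: str) -> str:
    """Hypothetical model call; should return 'a', 'b', or 'c'."""
    raise NotImplementedError("Plug in your model here")

correct = sum(
    letter_to_gold.get(predict(ex["query"]).strip().lower(), -1) == ex["gold"]
    for ex in test_data
)
print(f"Accuracy: {correct / len(test_data):.2%}")
```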