---
language:
- ar
license: apache-2.0
task_categories:
- multiple-choice
- text-classification
pretty_name: Sentiment Analysis MCQ Evaluation Dataset
tags:
- sentiment-analysis
- mcq
- financial
- arabic
- evaluation
- benchmark
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: original_split
    dtype: string
  - name: choices
    sequence: string
  - name: text
    dtype: string
  - name: answer
    dtype: string
  - name: original_sentiment
    dtype: string
  - name: query
    dtype: string
  - name: id
    dtype: string
  - name: gold
    dtype: int64
  - name: category
    dtype: string
  splits:
  - name: test
    num_bytes: 495875
    num_examples: 80
  download_size: 218370
  dataset_size: 495875
---

# Sentiment Analysis MCQ Evaluation Dataset

Test split for financial sentiment analysis in MCQ format, covering Arabic financial text.

## Dataset Structure

- **Format**: Multiple choice questions
- **Language**: Arabic
- **Domain**: Financial reports
- **Task**: Sentiment classification
- **Test split**: 80 examples (see dataset metadata above)

## Fields

- `id`: Unique identifier
- `query`: Full MCQ prompt
- `answer`: Correct answer letter
- `text`: Question text
- `choices`: Answer options [a, b, c]
- `gold`: Correct answer index
- `category`: Report category
- `original_sentiment`: Ground truth sentiment
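
The snippet below is a minimal sketch for inspecting these fields on the first test example; it assumes only the dataset ID and the single `test` split declared in the configuration above.

```python
from datasets import load_dataset

# Load the single `test` split declared in the dataset config.
ds = load_dataset("SahmBenchmark/Sentiment_Analysis_MCQ_eval", split="test")

# Print every documented field of the first example to see its layout.
example = ds[0]
for field in ["id", "query", "answer", "text", "choices",
              "gold", "category", "original_sentiment", "original_split"]:
    print(f"{field}: {example[field]}")
```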

## Answer Mapping

- a) positive - gold: 0
- b) negative - gold: 1
- c) neutral - gold: 2
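
As a sketch of how this mapping can be applied (the helper names here are illustrative, and it assumes the `answer` string starts with the choice letter):

```python
# Illustrative mapping tables; these names are not part of the dataset itself.
LETTER_TO_GOLD = {"a": 0, "b": 1, "c": 2}
GOLD_TO_SENTIMENT = {0: "positive", 1: "negative", 2: "neutral"}

def gold_to_letter(gold: int) -> str:
    """Convert a gold index back to its answer letter (0 -> 'a', ...)."""
    return "abc"[gold]

def answer_matches_gold(answer: str, gold: int) -> bool:
    """Check that the letter answer and the gold index agree.

    Assumes `answer` begins with the choice letter (e.g. 'a' or 'a) ...').
    """
    return LETTER_TO_GOLD[answer.strip().lower()[0]] == gold
```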

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/Sentiment_Analysis_MCQ_eval")
test_data = dataset['test']

for example in test_data:
    print(f"Question: {example['text']}")
    print(f"Choices: {example['choices']}")
    print(f"Correct: {example['answer']} (index: {example['gold']})")
```
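
For evaluation, model predictions are typically compared against `gold`. Continuing from the snippet above, the sketch below uses a hypothetical `predict` placeholder (not part of this dataset); replace it with your own model call returning a choice index.

```python
def predict(query: str) -> int:
    # Placeholder baseline: always answers "neutral" (index 2).
    # Swap in a real model call that returns 0, 1, or 2.
    return 2

# Score predictions against the gold indices and report accuracy.
correct = sum(int(predict(ex["query"]) == ex["gold"]) for ex in test_data)
print(f"Accuracy: {correct / len(test_data):.2%}")
```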