---
license: apache-2.0
datasets:
- agentlans/crash-course
- vicgalle/configurable-system-prompt-multitask
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
model-index:
- name: Qwen2.5-0.5B-Instruct-CrashCourse-dropout
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 29.49
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 7.23
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.08
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.79
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.11
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 6.76
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout
      name: Open LLM Leaderboard
---
# Qwen2.5-0.5B-Instruct-CrashCourse-dropout

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), adapted for instruction-following and multitask use.
It was trained on two datasets, [agentlans/crash-course](https://huggingface.co/datasets/agentlans/crash-course) and
[vicgalle/configurable-system-prompt-multitask](https://huggingface.co/datasets/vicgalle/configurable-system-prompt-multitask),
to improve how it handles diverse tasks and varied instruction formats.

> [!NOTE]  
> **Update:** Despite the weak benchmark scores, the model handles moderately complex prompts reasonably well, so there is likely room for further fine-tuning.

## Intended Use

This model is designed for:

- Answering questions related to crash course materials
- Handling configurable system prompts for multitask scenarios
- General instruction-following tasks
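
The snippet below is a minimal usage sketch with the 🤗 Transformers library; the prompt and generation settings are illustrative choices, not recommendations from the model authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentlans/Qwen2.5-0.5B-Instruct-CrashCourse-dropout"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen2.5 instruct models ship a chat template, so build the prompt from chat messages.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain gradient descent in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation settings here are illustrative defaults, not tuned values.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```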

## Training Procedure

The model was fine-tuned on the two datasets listed above, with Qwen/Qwen2.5-0.5B-Instruct as the base model.
Detailed hyperparameters and the exact training configuration will be added here later.
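
Until those details are published, the sketch below only illustrates how a supervised fine-tune over one of the datasets could be set up with TRL's `SFTTrainer`; the hyperparameters, output path, dataset format, and the use of TRL itself are assumptions, not the recipe actually used for this model.

```python
# Illustrative sketch only -- not the published training recipe for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumes the dataset exposes a standard "train" split in a format SFTTrainer accepts.
train_dataset = load_dataset("agentlans/crash-course", split="train")

training_args = SFTConfig(
    output_dir="qwen2.5-0.5b-crashcourse-sft",  # hypothetical output directory
    per_device_train_batch_size=4,              # assumed, not the actual value
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # base model named in this card
    train_dataset=train_dataset,
    args=training_args,
)
trainer.train()
```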

## Limitations

- The model's performance may be biased towards the specific domains covered in the training datasets.
- As with all language models, it may occasionally produce inaccurate or inconsistent outputs.
- The model's knowledge is limited to the information available in its training data and the base model's knowledge cutoff.

## Ethical Considerations

Users should be aware that this model, like all AI models, may reflect biases present in its training data. It's crucial to use the model responsibly and to verify important information from authoritative sources.

## Additional Information

For more details on the base model, please refer to the Qwen/Qwen2.5-0.5B-Instruct model card. For information about the datasets used in fine-tuning, check the respective dataset cards on the Hugging Face Hub.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/agentlans__Qwen2.5-0.5B-Instruct-CrashCourse-dropout-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=agentlans%2FQwen2.5-0.5B-Instruct-CrashCourse-dropout&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

|      Metric       | Qwen2.5-0.5B-Instruct-CrashCourse-dropout | Qwen2.5-0.5B-Instruct |
|-------------------|-----------------------------------------:|----------------------:|
| **Average**       |                               7.74 %     |                8.38 %  |
| IFEval (0-Shot)   |                              29.49 %     |               31.53 %  |
| BBH (3-Shot)      |                               7.23 %     |                8.17 %  |
| MATH Lvl 5 (4-Shot) |                              0.08 %     |                0.00 %  |
| GPQA (0-shot)     |                               1.79 %     |                1.23 %  |
| MuSR (0-shot)     |                               1.11 %     |                1.37 %  |
| MMLU-PRO (5-shot) |                               6.76 %     |                8.00 %  |