aymanbakiri committed
Commit: f64a9ab (verified)
Parent: 84489cb

Add model card

Files changed (1):
1. README.md  +36 -43
README.md CHANGED
@@ -1,62 +1,55 @@
 ---
-library_name: peft
-base_model: AnnaelleMyriam/SFT_M3_model
 tags:
-- mcqa
 - question-answering
 - sft
 - lora
-- qwen
-- unsloth
-- generated_from_trainer
-model-index:
-- name: MNLP_M3_mcqa_model_test
-  results: []
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# MNLP_M3_mcqa_model_test
-
-This model is a fine-tuned version of [AnnaelleMyriam/SFT_M3_model](https://huggingface.co/AnnaelleMyriam/SFT_M3_model) on an unknown dataset.
-
-## Model description
-
-More information needed

-## Intended uses & limitations

-More information needed

-## Training and evaluation data

-More information needed

-## Training procedure

-### Training hyperparameters

-The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 4
-- eval_batch_size: 8
-- seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 8
-- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.05
-- num_epochs: 1
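
For reference, the hyperparameters listed in the removed card correspond roughly to the following `transformers.TrainingArguments` setup. This is an illustrative sketch only; the actual training script is not part of this commit, and the output directory name is a placeholder.

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above (illustrative only;
# the real training script is not included in this commit).
training_args = TrainingArguments(
    output_dir="MNLP_M3_mcqa_model_test",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    optim="adamw_torch",  # AdamW, betas=(0.9, 0.999), eps=1e-8
    seed=42,
)
```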

-### Training results

-### Framework versions

-- PEFT 0.15.2
-- Transformers 4.52.4
-- Pytorch 2.7.0+cu126
-- Datasets 3.6.0
-- Tokenizers 0.21.1

 ---
+language: en
+license: apache-2.0
 tags:
+- text-generation
 - question-answering
+- mcqa
+- merged
 - sft
 - lora
+base_model: AnnaelleMyriam/SFT_M3_model
 ---

+# MNLP M3 MCQA Merged Model

+This model is a merged version of:
+- **Base SFT Model**: `AnnaelleMyriam/SFT_M3_model`
+- **LoRA Adapter**: `aymanbakiri/MNLP_M3_mcqa_model_adapter`

+## Model Description

+This is a specialized model for Multiple Choice Question Answering (MCQA) tasks, created by:
+1. Starting with the SFT model `AnnaelleMyriam/SFT_M3_model`
+2. Fine-tuning with LoRA adapters on MCQA data
+3. Merging the LoRA weights back into the base model
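
The merge described in the three steps above is typically done with PEFT's `merge_and_unload`; a minimal sketch, assuming the base model and adapter repos named in this card (the local output directory name is just an example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT base model, attach the LoRA adapter, then fold the adapter
# weights into the base weights to obtain a plain transformers checkpoint.
base = AutoModelForCausalLM.from_pretrained("AnnaelleMyriam/SFT_M3_model")
peft_model = PeftModel.from_pretrained(base, "aymanbakiri/MNLP_M3_mcqa_model_adapter")
merged = peft_model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("AnnaelleMyriam/SFT_M3_model")
merged.save_pretrained("MNLP_M3_mcqa_model_test")
tokenizer.save_pretrained("MNLP_M3_mcqa_model_test")
```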

+## Usage

+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer

+model = AutoModelForCausalLM.from_pretrained("aymanbakiri/MNLP_M3_mcqa_model_test")
+tokenizer = AutoTokenizer.from_pretrained("aymanbakiri/MNLP_M3_mcqa_model_test")

+# Example usage for MCQA
+prompt = """Question: What is the capital of France?
+Options: (A) London (B) Berlin (C) Paris (D) Madrid
+Answer:"""

+inputs = tokenizer(prompt, return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=5)
+answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(answer)
+```
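
One small caveat about the snippet above: decoding `outputs[0]` returns the prompt together with the completion. To print only the predicted answer, slice off the prompt tokens first:

```python
# Keep only the tokens generated after the prompt before decoding.
prompt_length = inputs["input_ids"].shape[1]
answer = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(answer.strip())
```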

+## Training Details

+- Base Model: SFT model fine-tuned for instruction following
+- LoRA Configuration: r=16, alpha=32, dropout=0.1
+- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head
+- Training Data: MNLP M2 MCQA Dataset
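
The adapter configuration listed above corresponds roughly to the following PEFT `LoraConfig`. This is a sketch reconstructed from the card's stated values, not the exact training configuration:

```python
from peft import LoraConfig

# Approximate adapter configuration, reconstructed from the values listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```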

+## Performance

+This merged model produces the same predictions as the base model with the LoRA adapter applied, but it is easier to deploy and use: it loads directly with `transformers` and avoids the adapter-application overhead of PEFT at inference time.