Commit 4746847
Parent(s): e1f69b7

End of training

Browse files:
- README.md +111 -0
- adapter_model.safetensors +1 -1
README.md ADDED

@@ -0,0 +1,111 @@
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilgpt2
model-index:
- name: distilgpt-monolinugal
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilgpt-monolinugal

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4876

## Model description

More information needed

## Intended uses & limitations

More information needed

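Since this repository ships a PEFT adapter (`adapter_model.safetensors`) on top of `distilgpt2`, a minimal loading sketch is shown below. This is an assumption-laden example rather than part of the generated card: the repo id `<user>/distilgpt-monolinugal` is a placeholder, and the adapter type is whatever `peft` recorded in the adapter config.

```python
# Minimal sketch (assumed usage): load the distilgpt2 base model and apply
# this repository's PEFT adapter. "<user>/distilgpt-monolinugal" is a
# placeholder repo id, not a confirmed one.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

model = PeftModel.from_pretrained(base, "<user>/distilgpt-monolinugal")
model.eval()

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
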
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP

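As a rough guide, the list above corresponds to a `transformers` `TrainingArguments` configuration along these lines. This is a sketch under assumptions: the output directory is hypothetical, and single-device training is assumed (12 per device × 8 accumulation steps = 96 total).

```python
# Hedged sketch: TrainingArguments matching the hyperparameters above.
# The output_dir is a hypothetical name; other fields mirror the list.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilgpt-monolinugal",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    gradient_accumulation_steps=8,       # 12 * 8 = 96 effective batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=8,
    fp16=True,                           # "Native AMP" mixed precision
)
```

The Adam betas and epsilon in the list match the `transformers` defaults, so they need no explicit arguments here.
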
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3098        | 0.16  | 200  | 3.5905          |
| 3.2847        | 0.32  | 400  | 3.5644          |
| 3.2612        | 0.48  | 600  | 3.5504          |
| 3.2636        | 0.64  | 800  | 3.5384          |
| 3.2481        | 0.8   | 1000 | 3.5301          |
| 3.2393        | 0.96  | 1200 | 3.5233          |
| 3.2381        | 1.12  | 1400 | 3.5184          |
| 3.2317        | 1.28  | 1600 | 3.5168          |
| 3.2244        | 1.44  | 1800 | 3.5123          |
| 3.2258        | 1.6   | 2000 | 3.5117          |
| 3.2238        | 1.76  | 2200 | 3.5058          |
| 3.2376        | 1.92  | 2400 | 3.5058          |
| 3.212         | 2.08  | 2600 | 3.5044          |
| 3.231         | 2.24  | 2800 | 3.5019          |
| 3.2044        | 2.4   | 3000 | 3.5003          |
| 3.2107        | 2.57  | 3200 | 3.5002          |
| 3.2096        | 2.73  | 3400 | 3.4996          |
| 3.215         | 2.89  | 3600 | 3.4963          |
| 3.2092        | 3.05  | 3800 | 3.4979          |
| 3.2034        | 3.21  | 4000 | 3.4964          |
| 3.1992        | 3.37  | 4200 | 3.4971          |
| 3.1975        | 3.53  | 4400 | 3.4941          |
| 3.222         | 3.69  | 4600 | 3.4932          |
| 3.2104        | 3.85  | 4800 | 3.4927          |
| 3.199         | 4.01  | 5000 | 3.4918          |
| 3.2033        | 4.17  | 5200 | 3.4927          |
| 3.201         | 4.33  | 5400 | 3.4924          |
| 3.1947        | 4.49  | 5600 | 3.4931          |
| 3.2172        | 4.65  | 5800 | 3.4907          |
| 3.201         | 4.81  | 6000 | 3.4908          |
| 3.2089        | 4.97  | 6200 | 3.4892          |
| 3.206         | 5.13  | 6400 | 3.4896          |
| 3.2074        | 5.29  | 6600 | 3.4884          |
| 3.2046        | 5.45  | 6800 | 3.4891          |
| 3.1899        | 5.61  | 7000 | 3.4888          |
| 3.196         | 5.77  | 7200 | 3.4891          |
| 3.1946        | 5.93  | 7400 | 3.4880          |
| 3.1951        | 6.09  | 7600 | 3.4887          |
| 3.1998        | 6.25  | 7800 | 3.4878          |
| 3.1775        | 6.41  | 8000 | 3.4880          |
| 3.1947        | 6.57  | 8200 | 3.4880          |
| 3.1876        | 6.73  | 8400 | 3.4876          |
| 3.1984        | 6.89  | 8600 | 3.4878          |
| 3.1927        | 7.05  | 8800 | 3.4875          |
| 3.2006        | 7.21  | 9000 | 3.4875          |
| 3.2042        | 7.37  | 9200 | 3.4875          |
| 3.1856        | 7.54  | 9400 | 3.4877          |
| 3.1952        | 7.7   | 9600 | 3.4877          |
| 3.1981        | 7.86  | 9800 | 3.4876          |

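Assuming the reported validation loss is the usual mean per-token cross-entropy for causal language modeling, the final loss translates into perplexity as follows (a worked check, not part of the generated card):

```python
import math

# Perplexity is exp(mean cross-entropy loss) for a causal language model.
eval_loss = 3.4876
print(math.exp(eval_loss))  # ~32.7
```
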

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 1.13.0+cu116
- Datasets 2.16.0
- Tokenizers 0.15.0
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:81d20fcce92c0be5e8d3b391bb75587cb9f2f54a05ab5a65c618dcbca081b450
 size 315310960