---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3
model-index:
- name: transduction-run8-20k-seed100-gpt4omini-instruct-fft_lr1e-5_epoch2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# transduction-run8-20k-seed100-gpt4omini-instruct-fft_lr1e-5_epoch2

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
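
The snippet below is a minimal usage sketch, not part of the original training or evaluation pipeline. It assumes the checkpoint is published under the hypothetical Hub id `barc0/transduction-run8-20k-seed100-gpt4omini-instruct-fft_lr1e-5_epoch2` and that prompts follow the same chat-messages format as the training data.

```python
# Minimal usage sketch (assumed workflow, not from the original repo):
# load the fine-tuned checkpoint and run one chat-formatted generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "barc0/transduction-run8-20k-seed100-gpt4omini-instruct-fft_lr1e-5_epoch2"  # hypothetical Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder prompt; real inputs should use the dataset's messages format.
messages = [{"role": "user", "content": "Describe the transformation rule and predict the output grid."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)

print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```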
					
						
## Model description

More information needed

## Intended uses & limitations

More information needed
## Training and evaluation data

This model was fine-tuned on the barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3 dataset (20k GPT-4o-mini-generated transduction problems, seed 100, in chat-messages format). The evaluation loss reported above and in the table below is computed on the corresponding evaluation split; no further details are provided.
## Training procedure
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 8
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
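
For reference, these hyperparameters map roughly onto the following `TrainingArguments` configuration. This is a minimal sketch assuming the usual TRL / alignment-handbook SFT setup; it is not the original training script, and the precision setting is assumed.

```python
# Rough reconstruction of the reported hyperparameters (assumed mapping, not the
# original alignment-handbook config). Per-device batch size 8 on 8 GPUs with
# gradient accumulation 2 yields the reported total train batch size of 128.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="transduction-run8-20k-seed100-gpt4omini-instruct-fft_lr1e-5_epoch2",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=8,
    optim="adamw_torch",  # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (defaults)
    bf16=True,            # assumed; the card does not state the training precision
)
```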
					
						
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0692        | 0.9966 | 145  | 0.0734          |
| 0.0517        | 1.9931 | 290  | 0.0615          |
### Framework versions

- Transformers 4.45.0.dev0
- PyTorch 2.4.0+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
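
To check that a local environment matches the versions above, a quick sanity check like the following can be used (a sketch; the development build of Transformers may report a slightly different version string):

```python
# Print installed versions to compare against the ones reported in this card.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # reported: 4.45.0.dev0
print("PyTorch:", torch.__version__)              # reported: 2.4.0+cu121
print("Datasets:", datasets.__version__)          # reported: 3.0.1
print("Tokenizers:", tokenizers.__version__)      # reported: 0.19.1
```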