---
library_name: peft
license: llama2
base_model: neurotechnology/Lt-Llama-2-13b-instruct-hf
tags:
- generated_from_trainer
model-index:
- name: outputs/anon-lt-lora
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
adapter: lora
base_model: neurotechnology/Lt-Llama-2-13b-instruct-hf
# mixed precision
bf16: auto
# data & splitting
dataset_processes: 32
datasets:
  # ---------- TRAIN ----------
  - path: .
    type: alpaca
    data_files: ["train.json"]
    message_property_mappings:
      role: role
      content: content
validation_datasets:
  # ---------- VALIDATION ----------
  - path: .
    type: alpaca
    data_files: ["validation.json"]
    message_property_mappings:
      role: role
      content: content
# we're using explicit splits above, so no HF split / inline splitting:
val_set_size: 0.0
shuffle_merged_datasets: false
# LoRA hyperparameters
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
# optimizer & schedule
optimizer: adamw_bnb_8bit
learning_rate: 2e-4
lr_scheduler: cosine
weight_decay: 0.0
# batching & accumulation
micro_batch_size: 16
gradient_accumulation_steps: 1
gradient_checkpointing: true
# training loop
num_epochs: 3
max_prompt_len: 512
sequence_len: 4096
train_on_inputs: false
# precision & quantization
load_in_8bit: true
load_in_4bit: false
qlora_sharded_model_loading: false
# resource config
use_ray: false
ray_num_workers: 1
resources_per_worker:
  GPU: 1
# output & checkpointing
output_dir: ./outputs/anon-lt-lora
save_safetensors: true
save_only_model: false
load_best_model_at_end: true
pretrain_multipack_attn: true
pretrain_multipack_buffer_size: 10000
trl:
  log_completions: false
  ref_model_sync_steps: 64
  ref_model_mixup_alpha: 0.9
  sync_ref_model: false
  use_vllm: false
  vllm_device: auto
  vllm_dtype: auto
  vllm_gpu_memory_utilization: 0.9
```
</details><br>
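To reproduce the run, the config above can be saved as e.g. `anon-lt-lora.yml` (filename assumed) and launched with Axolotl, for example via `accelerate launch -m axolotl.cli.train anon-lt-lora.yml`.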
# outputs/anon-lt-lora
This model is a LoRA fine-tuned version of [neurotechnology/Lt-Llama-2-13b-instruct-hf](https://huggingface.co/neurotechnology/Lt-Llama-2-13b-instruct-hf) trained on a local Alpaca-format dataset (`train.json`, evaluated on `validation.json`; see the Axolotl config above).
## Model description
This repository contains a LoRA adapter for [neurotechnology/Lt-Llama-2-13b-instruct-hf](https://huggingface.co/neurotechnology/Lt-Llama-2-13b-instruct-hf), trained with Axolotl while the base model was loaded in 8-bit. The adapter targets the attention and MLP projection modules (q/k/v/o, gate, up, down) with rank 8, alpha 16 and dropout 0.05, as configured above.
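A rough PEFT equivalent of the adapter settings (a sketch only, assuming Axolotl maps these fields directly onto `peft.LoraConfig`; the object Axolotl actually builds may differ in defaults):
```python
from peft import LoraConfig

# LoRA hyperparameters copied from the Axolotl config above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```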
## Intended uses & limitations
More information needed
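For reference, a minimal sketch for loading the adapter for inference, assuming the adapter weights are available locally under `./outputs/anon-lt-lora` (the training output directory; substitute a Hub repo id if the adapter is published there):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE_MODEL = "neurotechnology/Lt-Llama-2-13b-instruct-hf"
ADAPTER_PATH = "./outputs/anon-lt-lora"  # assumption: local adapter dir, taken from output_dir in the config

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches load_in_8bit: true above
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, ADAPTER_PATH)
model.eval()

# Alpaca-style prompt; the exact template is assumed from `type: alpaca` in the config.
prompt = "### Instruction:\nTrumpai prisistatykite.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```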
## Training and evaluation data
Per the Axolotl config above, training used a local Alpaca-format dataset loaded from `train.json`, with a separate `validation.json` used for evaluation (explicit validation split, `val_set_size: 0.0`). No further details about the data are available.
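The `type: alpaca` loader expects instruction-tuning records with `instruction`, `input` and `output` fields; a sketch of the assumed layout of `train.json` / `validation.json` (field values here are invented placeholders, since the actual data is not published):
```python
import json

# Assumed Alpaca-style schema; the real contents of train.json / validation.json are not published.
example_records = [
    {
        "instruction": "Trumpai apibendrinkite pateiktą tekstą.",  # placeholder task
        "input": "Tekstas, kurį reikia apibendrinti ...",          # optional context, may be empty
        "output": "Trumpa santrauka ...",                          # target completion
    },
]

print(json.dumps(example_records, ensure_ascii=False, indent=2))
```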
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: 8-bit AdamW (ADAMW_BNB via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3.0
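With `micro_batch_size: 16`, `gradient_accumulation_steps: 1` and a single GPU, the effective batch size is 16 sequences per optimizer step; a quick sanity check (the training-set size is not published, so it is left as a placeholder):
```python
import math

micro_batch_size = 16            # per-device batch size from the config
gradient_accumulation_steps = 1
num_gpus = 1                     # resources_per_worker: GPU: 1
num_epochs = 3

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)      # -> 16

num_train_examples = 10_000      # placeholder: real dataset size unknown
steps_per_epoch = math.ceil(num_train_examples / effective_batch_size)
print(steps_per_epoch, steps_per_epoch * num_epochs)  # steps per epoch, total optimizer steps
```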
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0