[INFO|parser.py:325] 2024-07-11 11:00:10,231 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: None
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file tokenizer.model
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: None
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,235 >> loading file tokenizer_config.json
[INFO|loader.py:50] 2024-07-11 11:00:10,286 >> Loading dataset dev_output.json...
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
[INFO|configuration_utils.py:731] 2024-07-11 11:00:12,504 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/config.json
[INFO|configuration_utils.py:800] 2024-07-11 11:00:12,505 >> Model config LlamaConfig {
  "_name_or_path": "saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.42.3",
  "use_cache": false,
  "vocab_size": 32000
}
[INFO|patcher.py:81] 2024-07-11 11:00:12,505 >> Using KV cache for faster generation.
[INFO|modeling_utils.py:3553] 2024-07-11 11:00:12,529 >> loading weights file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/model.safetensors.index.json
[INFO|modeling_utils.py:1531] 2024-07-11 11:00:12,529 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1000] 2024-07-11 11:00:12,531 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2
}
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|modeling_utils.py:4364] 2024-07-11 11:00:16,788 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
[INFO|modeling_utils.py:4372] 2024-07-11 11:00:16,788 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:953] 2024-07-11 11:00:16,792 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/generation_config.json
[INFO|configuration_utils.py:1000] 2024-07-11 11:00:16,792 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9
}
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|attention.py:80] 2024-07-11 11:00:16,798 >> Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
[INFO|loader.py:196] 2024-07-11 11:00:16,802 >> all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|trainer.py:3788] 2024-07-11 11:00:16,914 >>
***** Running Prediction *****
[INFO|trainer.py:3790] 2024-07-11 11:00:16,914 >> Num examples = 2554
[INFO|trainer.py:3793] 2024-07-11 11:00:16,914 >> Batch size = 2
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[WARNING|logging.py:328] 2024-07-11 11:00:17,582 >> We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[INFO|trainer.py:127] 2024-07-11 11:00:34,679 >> Saving prediction results to saves/LLaMA2-7B-Chat/full/eval_2024-07-11-10-49-45/generated_predictions.jsonl