[INFO|parser.py:325] 2024-07-11 11:00:10,231 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: None
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file tokenizer.model
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: None
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,234 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2159] 2024-07-11 11:00:10,235 >> loading file tokenizer_config.json
[INFO|loader.py:50] 2024-07-11 11:00:10,286 >> Loading dataset dev_output.json...
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:10 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: None
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
07/11/2024 11:00:11 - INFO - llamafactory.data.loader - Loading dataset dev_output.json...
[INFO|configuration_utils.py:731] 2024-07-11 11:00:12,504 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/config.json
[INFO|configuration_utils.py:800] 2024-07-11 11:00:12,505 >> Model config LlamaConfig {
  "_name_or_path": "saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.42.3",
  "use_cache": false,
  "vocab_size": 32000
}
[INFO|patcher.py:81] 2024-07-11 11:00:12,505 >> Using KV cache for faster generation.
[INFO|modeling_utils.py:3553] 2024-07-11 11:00:12,529 >> loading weights file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/model.safetensors.index.json
[INFO|modeling_utils.py:1531] 2024-07-11 11:00:12,529 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1000] 2024-07-11 11:00:12,531 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2
}
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:12 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|modeling_utils.py:4364] 2024-07-11 11:00:16,788 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
[INFO|modeling_utils.py:4372] 2024-07-11 11:00:16,788 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:953] 2024-07-11 11:00:16,792 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-11-09-30-54_llama2_inst_truth/generation_config.json
[INFO|configuration_utils.py:1000] 2024-07-11 11:00:16,792 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9
}
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|attention.py:80] 2024-07-11 11:00:16,798 >> Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
[INFO|loader.py:196] 2024-07-11 11:00:16,802 >> all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/11/2024 11:00:16 - INFO - llamafactory.model.loader - all params: 6,738,415,616
[INFO|trainer.py:3788] 2024-07-11 11:00:16,914 >>
***** Running Prediction *****
[INFO|trainer.py:3790] 2024-07-11 11:00:16,914 >> Num examples = 2554
[INFO|trainer.py:3793] 2024-07-11 11:00:16,914 >> Batch size = 2
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[WARNING|logging.py:328] 2024-07-11 11:00:17,582 >> We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:17 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/11/2024 11:00:34 - [INFO|trainer.py:127] 2024-07-11 11:00:34,679 >> Saving prediction results to saves/LLaMA2-7B-Chat/full/eval_2024-07-11-10-49-45/generated_predictions.jsonl
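The repeated `past_key_values` deprecation warnings during prediction come from transformers (4.42.3 here) receiving the cache in the legacy tuple format. They are harmless for this run, but the recommended migration is to pass a `Cache` object instead. Below is a minimal, illustrative sketch of that migration; it is not part of this log, and the checkpoint path, prompt, and variable names are placeholders:

```python
# Illustrative only: wrapping legacy tuple-format past_key_values in the
# Cache API that the warning above recommends (transformers >= 4.36).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint, not the one from this run
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model(**inputs, use_cache=True)

# If a forward pass still hands back the legacy tuple of per-layer
# (key, value) tensors, convert it to a DynamicCache once:
cache = outputs.past_key_values
if isinstance(cache, tuple):
    cache = DynamicCache.from_legacy_cache(cache)

# Subsequent decoding steps then pass the Cache object directly,
# which avoids the deprecation warning seen above.
next_token = outputs.logits[:, -1:].argmax(-1)
next_outputs = model(next_token, past_key_values=cache, use_cache=True)
```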