loading from /scratch/spp9399/LLMS/Phi-3.5-mini-instruct
layer number: 32, head number 32
You are using a model of type phi3 to instantiate a model of type llama. This is not supported for all configurations of models and can yield errors.
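This first warning is expected here: the script forces the Phi-3 checkpoint through the repo's modified `LlamaForCausalLM`. A quick way to see the mismatch, and the `rope_scaling` shape that causes the crash further down (a minimal sketch; the checkpoint path is the one from the log):

```python
# Inspect what the checkpoint actually declares before loading it
# through the llama classes.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("/scratch/spp9399/LLMS/Phi-3.5-mini-instruct")
print(config.model_type)    # "phi3", not "llama"
print(config.rope_scaling)  # longrope-style: {"type": "longrope", "short_factor": [...], "long_factor": [...]} -- no "factor" key
```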
Missing required keys in `rope_scaling`: {'factor'}
The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
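The deprecated kwarg comes from the call at retrieval_head_detection.py line 203 (visible in the traceback below), which also passes the string to the wrong keyword: `use_flash_attention_2="flash_attention_2"`. A corrected sketch of the load call, assuming a recent transformers with flash-attn installed:

```python
import torch
from transformers import LlamaForCausalLM

model_name = "/scratch/spp9399/LLMS/Phi-3.5-mini-instruct"
model = LlamaForCausalLM.from_pretrained(
    model_name,
    attn_implementation="flash_attention_2",  # replaces use_flash_attention_2=True
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```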
LlamaForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From v4.50 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
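For reference, a sketch of the inheritance fix the second bullet describes, applied to the repo's patched model class (assuming the class layout in faiss_attn/source/modeling_llama.py, where `LlamaForCausalLM` extends `LlamaPreTrainedModel`):

```python
from transformers.generation import GenerationMixin

# GenerationMixin goes after the PreTrainedModel subclass in the base list,
# as the warning requires; otherwise class resolution raises an exception.
class LlamaForCausalLM(LlamaPreTrainedModel, GenerationMixin):
    ...
```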
Traceback (most recent call last):
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/retrieval_head_detection.py", line 622, in <module>
    ht = LLMNeedleHaystackTester(model_name=model_name,
                                 model_name_suffix=args.model_name_suffix,
    ...<7 lines>...
                                 haystack_dir=args.haystack_dir
                                 )
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/retrieval_head_detection.py", line 203, in __init__
    self.model_to_test = LlamaForCausalLM.from_pretrained(model_name,
                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
        use_flash_attention_2="flash_attention_2", torch_dtype=torch.bfloat16,device_map=
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ext3/miniconda3/envs/rh_conda/lib/python3.13/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/ext3/miniconda3/envs/rh_conda/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4342, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 1292, in __init__
    self.model = LlamaModel(config)
                 ~~~~~~~~~~^^^^^^^^
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 1124, in __init__
    [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
     ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 907, in __init__
    self.self_attn = LLAMA_ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 487, in __init__
    super().__init__(*args, **kwargs)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 342, in __init__
    self._init_rope()
    ~~~~~~~~~~~~~~~^^
  File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 353, in _init_rope
    scaling_factor = self.config.rope_scaling["factor"]
                     ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'factor'
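The KeyError is the real failure: the repo's `_init_rope` assumes a llama-style `rope_scaling` dict with a `"factor"` key, while Phi-3.5 ships a longrope-style dict (`short_factor`/`long_factor`, no `"factor"`). A defensive sketch of the lookup at modeling_llama.py line 353, assuming the usual surrounding names in that file (`LlamaRotaryEmbedding`, `self.head_dim`, `self.rope_theta`):

```python
def _init_rope(self):
    if self.config.rope_scaling is None:
        self.rotary_emb = LlamaRotaryEmbedding(
            self.head_dim,
            max_position_embeddings=self.max_position_embeddings,
            base=self.rope_theta,
        )
    else:
        scaling_type = self.config.rope_scaling["type"]
        # Phi-3.5's longrope dict has no "factor"; default instead of crashing.
        scaling_factor = self.config.rope_scaling.get("factor", 1.0)
        ...
```

Note this only avoids the crash; it does not implement longrope scaling, so long-context positions will not be rotated the way Phi-3.5 expects. Loading the checkpoint through phi3-aware modeling code is the more faithful fix.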