loading from /scratch/spp9399/LLMS/Phi-3.5-mini-instruct
layer number: 32, head number 32
You are using a model of type phi3 to instantiate a model of type llama. This is not supported for all configurations of models and can yield errors.
Missing required keys in `rope_scaling`: 'factor'
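This warning already points at the crash at the bottom of the traceback: per the published Phi-3.5-mini-instruct config, the rope_scaling block uses the longrope scheme with short_factor/long_factor lists and has no scalar "factor" key, which is what the Llama-style loading code expects. A quick sanity check on the local copy (a sketch; the expected keys in the comments assume the local files match the published config):

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("/scratch/spp9399/LLMS/Phi-3.5-mini-instruct")
print(cfg.model_type)            # "phi3" -- hence the phi3-as-llama warning above
print(sorted(cfg.rope_scaling))  # expect keys like "long_factor", "short_factor", "type"; no "factor"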
The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
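As the warning says, use_flash_attention_2 is deprecated in favor of the attn_implementation argument. A minimal sketch of how the from_pretrained call shown in the traceback below (retrieval_head_detection.py, line 203) could be rewritten; the import path is inferred from the traceback paths and may differ from what the script actually uses:

import torch
# Vendored class; import path inferred from the traceback, adjust to the script's real import.
from faiss_attn.source.modeling_llama import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "/scratch/spp9399/LLMS/Phi-3.5-mini-instruct",
    attn_implementation="flash_attention_2",  # replaces the deprecated use_flash_attention_2 kwarg
    torch_dtype=torch.bfloat16,
    device_map="balanced",
).eval()

This only addresses the deprecation warning; the load still fails on the rope_scaling KeyError at the end of the traceback until that is handled.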
LlamaForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
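For the second bullet: a hedged sketch of the inheritance change the warning asks for, applied to the vendored LlamaForCausalLM in faiss_attn/source/modeling_llama.py. The base-class name LlamaPreTrainedModel is an assumption about that file; the point is only that GenerationMixin is added to the bases after the PreTrainedModel subclass, as the warning requires:

from transformers.generation import GenerationMixin
# In the vendored file LlamaPreTrainedModel is defined locally; imported here only
# so the sketch is self-contained.
from transformers.models.llama.modeling_llama import LlamaPreTrainedModel

class LlamaForCausalLM(LlamaPreTrainedModel, GenerationMixin):
    ...  # existing vendored implementation unchanged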
Traceback (most recent call last):
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/retrieval_head_detection.py", line 622, in <module>
ht = LLMNeedleHaystackTester(model_name=model_name,
model_name_suffix=args.model_name_suffix,
...<7 lines>...
haystack_dir=args.haystack_dir
)
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/retrieval_head_detection.py", line 203, in __init__
self.model_to_test = LlamaForCausalLM.from_pretrained(model_name,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
use_flash_attention_2="flash_attention_2", torch_dtype=torch.bfloat16,device_map='balanced').eval()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ext3/miniconda3/envs/rh_conda/lib/python3.13/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
return func(*args, **kwargs)
File "/ext3/miniconda3/envs/rh_conda/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4342, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 1292, in __init__
self.model = LlamaModel(config)
~~~~~~~~~~^^^^^^^^
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 1124, in __init__
[LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 907, in __init__
self.self_attn = LLAMA_ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 487, in __init__
super().__init__(*args, **kwargs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 342, in __init__
self._init_rope()
~~~~~~~~~~~~~~~^^
File "/scratch/spp9399/ETNLP/original/Retrieval_Head/faiss_attn/source/modeling_llama.py", line 353, in _init_rope
scaling_factor = self.config.rope_scaling["factor"]
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'factor'
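The KeyError is the real blocker: the vendored _init_rope assumes a Llama-style rope_scaling dict with a scalar "factor", which Phi-3.5's longrope config does not have. A hedged sketch of one way to get past it, assuming the vendored method follows the older upstream Llama pattern (LlamaRotaryEmbedding and related classes, self.head_dim, self.max_position_embeddings, self.rope_theta); it merely falls back to plain RoPE when no "factor" is present, so it avoids the crash but does not implement longrope scaling, and long-context behavior may differ from the official Phi3 code:

def _init_rope(self):
    rope_scaling = getattr(self.config, "rope_scaling", None)
    if rope_scaling is None or "factor" not in rope_scaling:
        # Phi-3.5 ships {"type": "longrope", "short_factor": [...], "long_factor": [...]}
        # with no scalar "factor"; use plain RoPE here instead of raising KeyError.
        self.rotary_emb = LlamaRotaryEmbedding(
            self.head_dim,
            max_position_embeddings=self.max_position_embeddings,
            base=self.rope_theta,
        )
        return
    scaling_type = rope_scaling["type"]
    scaling_factor = rope_scaling["factor"]
    if scaling_type == "linear":
        self.rotary_emb = LlamaLinearScalingRotaryEmbedding(
            self.head_dim,
            max_position_embeddings=self.max_position_embeddings,
            scaling_factor=scaling_factor,
            base=self.rope_theta,
        )
    elif scaling_type == "dynamic":
        self.rotary_emb = LlamaDynamicNTKScalingRotaryEmbedding(
            self.head_dim,
            max_position_embeddings=self.max_position_embeddings,
            scaling_factor=scaling_factor,
            base=self.rope_theta,
        )
    else:
        raise ValueError(f"Unknown RoPE scaling type {scaling_type}")

Alternatively, loading the checkpoint with the stock Phi3 classes via AutoModelForCausalLM sidesteps the error entirely, at the cost of whatever attention instrumentation the vendored modeling_llama adds.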