runtime error
Exit code: 1. Reason: ` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.

tokenizer_config.json: 100%|██████████| 177k/177k [00:00<00:00, 57.5MB/s]
tokenizer.json: 100%|██████████| 9.26M/9.26M [00:00<00:00, 160MB/s]
special_tokens_map.json: 100%|██████████| 414/414 [00:00<00:00, 2.63MB/s]
config.json: 100%|██████████| 1.14k/1.14k [00:00<00:00, 6.80MB/s]

`rope_scaling`'s original_max_position_embeddings field must be less than max_position_embeddings, got 8192 and max_position_embeddings=2048

model.safetensors.index.json: 100%|██████████| 157k/157k [00:00<00:00, 98.2MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 11, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(model_id).to("cuda")
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4261, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1152, in _get_resolved_checkpoint_files
    checkpoint_files, sharded_metadata = get_checkpoint_shard_files(
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/hub.py", line 1103, in get_checkpoint_shard_files
    shard_filenames = sorted(set(index["weight_map"].values()))
KeyError: 'weight_map'
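For context on the final `KeyError: 'weight_map'`: transformers parses the downloaded `model.safetensors.index.json` and expects a top-level `"weight_map"` dict mapping each parameter name to the shard file that stores it. A minimal sketch of that expectation (the index contents below are illustrative placeholders, not the actual repo's files):

```python
# A well-formed sharded checkpoint index contains a "weight_map" entry.
# The dicts here are hypothetical examples, not real repo contents.
good_index = {
    "metadata": {"total_size": 123456},
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
        "lm_head.weight": "model-00002-of-00002.safetensors",
    },
}

# This mirrors the failing line in get_checkpoint_shard_files:
shard_filenames = sorted(set(good_index["weight_map"].values()))
print(shard_filenames)

# An index missing "weight_map" (e.g. a malformed or partially uploaded
# index file in the repo) reproduces the KeyError from the traceback:
bad_index = {"metadata": {"total_size": 123456}}
try:
    sorted(set(bad_index["weight_map"].values()))
except KeyError as e:
    print(f"KeyError: {e}")  # KeyError: 'weight_map'
```

If the repo's real index file lacks `"weight_map"`, the fix lies in the model repository (regenerating or re-uploading the index), not in the app code.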