"Use this model" code not working on Kaggle

#16
by Nirav-Madhani - opened

The default code snippet generated for this model fails on Kaggle with a model config error. After the usual `trust_remote_code` warning about the downloaded `configuration_magma.py` ("Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision."), the pipeline raises:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_35/589798658.py in <cell line: 0>()
      2 from transformers import pipeline
      3 
----> 4 pipe = pipeline("image-text-to-text", model="microsoft/Magma-8B", trust_remote_code=True)
      5 messages = [
      6     {

/usr/local/lib/python3.11/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    940     if isinstance(model, str) or framework is None:
    941         model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 942         framework, model = infer_framework_load_model(
    943             adapter_path if adapter_path is not None else model,
    944             model_classes=model_classes,

/usr/local/lib/python3.11/dist-packages/transformers/pipelines/base.py in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
    303             for class_name, trace in all_traceback.items():
    304                 error += f"while loading with {class_name}, an error is thrown:\n{trace}\n"
--> 305             raise ValueError(
    306                 f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    307             )

ValueError: Could not load model microsoft/Magma-8B with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:

while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/transformers/pipelines/base.py", line 292, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/models/auto/auto_factory.py", line 574, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.microsoft.Magma-8B.b33355b3cffebdf9d8e60207f30a2cb1193b55c0.configuration_magma.MagmaConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Emu3Config, FuyuConfig, Gemma3Config, GitConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, PaliGemmaConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
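The failure is that `MagmaConfig` is not registered in the `AutoModelForImageTextToText` mapping, so the `image-text-to-text` pipeline cannot resolve a model class for it. A possible workaround (a sketch, not an official fix; it assumes the model's remote code exposes itself through the generic `AutoModelForCausalLM`/`AutoProcessor` classes, as the Magma-8B model card does) is to skip the pipeline and load the model directly, pinning the revision from the traceback so newer remote code is not pulled in silently:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "microsoft/Magma-8B"
# Revision hash taken from the traceback above; pinning it addresses the
# "you can pin a revision" warning about the downloaded remote code.
REVISION = "b33355b3cffebdf9d8e60207f30a2cb1193b55c0"

# Load via the generic Auto classes instead of the image-text-to-text
# pipeline, which has no mapping for MagmaConfig.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    revision=REVISION,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(
    MODEL_ID,
    revision=REVISION,
    trust_remote_code=True,
)
```

With the model and processor loaded this way, inputs can be prepared with `processor(...)` and passed to `model.generate(...)` instead of going through `pipeline(...)`.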