Error loading model with device_map="auto" for AutoModelForVisualQuestionAnswering in visual-question-answering pipeline #34681

Closed
@chakravarthik27

Description

System Info

  • transformers version: 4.44.2
  • Platform: Windows-10-10.0.22631-SP0
  • Python version: 3.9.13
  • Huggingface_hub version: 0.24.7
  • Safetensors version: 0.4.5
  • Accelerate version: 0.34.2
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.0.1+cpu (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:

Who can help?

@Rocketknight1

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

from transformers import pipeline

# `path` is a local visual-question-answering checkpoint (not shown in the report).
# The error is raised when device_map="auto" is passed to the pipeline here.
pipe = pipeline("visual-question-answering", model=path, device_map="auto")

Expected behavior

device_map="auto" should be supported for all model types under the visual-question-answering pipeline.
