
AutoModelForDepthEstimation/DepthAnythingDepthEstimationHead unexpected behavior in JIT #34679

@sarmientoF

Description


System Info

macOS / Linux, CPU / NVIDIA GPU

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

import torch

from transformers import AutoImageProcessor, AutoModelForDepthEstimation

image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
model.config.return_dict = False

traced_model = torch.jit.optimize_for_inference(
    torch.jit.trace(model, (torch.rand(1, 3, 518 * 1, 518 * 1)), strict=False)
)
traced_model(torch.rand(1, 3, 518 * 2, 518 * 2))[0].shape # output: (1, 3, 518 * 1, 518 * 1)

Expected behavior

traced_model(torch.rand(1, 3, 518 * 2, 518 * 2))[0].shape # output: (1, 3, 518 * 2, 518 * 2)
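
For context, the frozen output size is what you would expect if the interpolation target size inside the depth head is computed with Python shape arithmetic: torch.jit.trace records such values as constants at trace time. Below is a minimal, self-contained sketch of that mechanism; it does not use the transformers model at all, just a toy Upsampler module (a made-up name for illustration) that follows the same interpolate-to-a-size-derived-from-the-input pattern.

import torch
import torch.nn.functional as F

class Upsampler(torch.nn.Module):
    # Toy module mimicking the suspected pattern: the interpolation target
    # size is computed from the input's spatial dims as plain Python ints.
    def forward(self, x):
        h, w = int(x.shape[-2]), int(x.shape[-1])
        # During torch.jit.trace, h and w are ordinary Python ints, so this
        # size tuple is baked into the traced graph as a constant.
        return F.interpolate(x, size=(h * 2, w * 2), mode="bilinear", align_corners=True)

m = Upsampler().eval()
traced = torch.jit.trace(m, torch.rand(1, 1, 4, 4))

print(m(torch.rand(1, 1, 8, 8)).shape)       # torch.Size([1, 1, 16, 16])
print(traced(torch.rand(1, 1, 8, 8)).shape)  # torch.Size([1, 1, 8, 8]), stuck at the trace-time size

If that is indeed the cause here, possible workarounds until the model is trace-friendly are re-tracing per input resolution, or resizing the traced output back to the input resolution outside the model; keeping the size truly dynamic would require torch.jit.script or an export path that supports dynamic shapes.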
