System Info
- transformers version: 4.43.0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.9
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?:
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
from transformers import CLIPModel, CLIPTokenizerFast
tokenizer = CLIPTokenizerFast.from_pretrained("patrickjohncyh/fashion-clip")
model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
tokenized = tokenizer(["hello"], return_tensors="pt", padding=True)
print("tokenized", tokenized)
# bus error occurs here
embed = model.get_text_features(**tokenized).detach().cpu().numpy()
print("embedded", tokenized)
This gives:
tokenized {'input_ids': tensor([[49406, 3497, 49407]]), 'attention_mask': tensor([[1, 1, 1]])}
zsh: bus error python test_hf.py
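In case it helps narrow this down, here is a small variation of the script above (a sketch only, assuming the checkpoint also ships the slow tokenizer files) that feeds the model tokens from the slow CLIPTokenizer instead of the fast one, to check whether the fast tokenizer is involved at all:

from transformers import CLIPModel, CLIPTokenizer
import torch

# Same checkpoint, but with the slow (Python) tokenizer instead of CLIPTokenizerFast
tokenizer = CLIPTokenizer.from_pretrained("patrickjohncyh/fashion-clip")
model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")

tokenized = tokenizer(["hello"], return_tensors="pt", padding=True)
print("tokenized (slow)", tokenized)

with torch.no_grad():
    # If the bus error still happens here, the fast tokenizer is probably not the culprit
    embed = model.get_text_features(**tokenized)
print("embedded", embed.shape)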
I don't think this issue has been reported already.
After bisecting versions, 4.42.4 does not exhibit the issue, while 4.43.0 does.
I have little insight to provide beyond the bus error itself, other than that it does not occur with the clip-vit-base-patch32 model (see the comparison sketch below).
I saw some breaking changes listed for this release, but only concerning the tokenizer.
I have not had time to test on a Linux distribution yet.
Thanks!
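For reference, the comparison I mean is along these lines (a sketch, assuming both checkpoints are loaded and queried the same way); it mirrors what I observed: only the fashion-clip checkpoint triggers the crash on 4.43.0:

from transformers import CLIPModel, CLIPTokenizerFast
import torch

def text_features(checkpoint):
    # Load a checkpoint and run the same text through get_text_features
    tokenizer = CLIPTokenizerFast.from_pretrained(checkpoint)
    model = CLIPModel.from_pretrained(checkpoint)
    tokenized = tokenizer(["hello"], return_tensors="pt", padding=True)
    with torch.no_grad():
        return model.get_text_features(**tokenized)

print("openai:", text_features("openai/clip-vit-base-patch32").shape)       # fine on 4.43.0
print("fashion-clip:", text_features("patrickjohncyh/fashion-clip").shape)  # bus error on 4.43.0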
Expected behavior
Using the exact same script with the Hugging Face CLIP pretrained model (openai/clip-vit-base-patch32), the embeddings are computed as they should be:
from transformers import CLIPProcessor, CLIPTokenizerFast
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
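For completeness, a self-contained version of that control case (a sketch; the processor delegates text to the underlying tokenizer, so this should behave the same as calling the tokenizer directly):

from transformers import CLIPModel, CLIPProcessor
import torch

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Text-only call: returns input_ids and attention_mask, same as the tokenizer
inputs = processor(text=["hello"], return_tensors="pt", padding=True)
with torch.no_grad():
    embed = model.get_text_features(**inputs)
print("embedded", embed.shape)  # completes normally with this checkpoint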