
Error: ModuleNotFoundError: No module named 'flash_attn.flash_attention' #370

Open
Charimanhua opened this issue Nov 27, 2024 · 0 comments


When running the example code:
```python
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features; use the normalized image/text features for downstream tasks
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # [[1.268734e-03 5.436878e-02 6.795761e-04 9.436829e-01]]
```

the following error is raised:
```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 4
      2 from PIL import Image
      3 import torch.nn.functional as F
----> 4 import cn_clip.clip as clip
      5 from cn_clip.clip import load_from_name
      7 # Load the Chinese-CLIP model and preprocessor

File ~/HuahaiRan/Chinese-CLIP-master/cn_clip/clip/__init__.py:4
      1 from .bert_tokenizer import FullTokenizer
      3 _tokenizer = FullTokenizer()
----> 4 from .model import convert_state_dict
      5 from .utils import load_from_name, available_models, tokenize, image_transform, load

File ~/HuahaiRan/Chinese-CLIP-master/cn_clip/clip/model.py:16
     14 import importlib.util
     15 if importlib.util.find_spec('flash_attn'):
---> 16     FlashMHA = importlib.import_module('flash_attn.flash_attention').FlashMHA
     18 from cn_clip.clip import _tokenizer
     19 from cn_clip.clip.configuration_bert import BertConfig

File ~/anaconda3/envs/meme/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
    124             break
    125         level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)

ModuleNotFoundError: No module named 'flash_attn.flash_attention'
```
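For context, the guard in `cn_clip/clip/model.py` (visible in the traceback) only checks that the top-level `flash_attn` package can be found; it then unconditionally imports the `flash_attn.flash_attention` submodule, which newer flash-attn releases no longer ship. A minimal check of that mismatch (a sketch, assuming the same `meme` conda environment is active):

```python
import importlib.util
from importlib.metadata import version

# What model.py checks: is the flash_attn package installed at all?
print(importlib.util.find_spec('flash_attn') is not None)             # True  -> package found

# What model.py then imports: the flash_attn.flash_attention submodule.
# If the installed flash-attn no longer ships this module, the spec is None.
print(importlib.util.find_spec('flash_attn.flash_attention') is None)  # True -> import will fail

print(version("flash-attn"))  # which flash-attn release is actually installed
```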

However, when I check the environment,

[two screenshots of the environment's installed packages]

the corresponding library is installed. How can this be resolved? Thanks!
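In case it helps, one possible workaround (my own assumption, not an official fix from the Chinese-CLIP repo) is to make the optional import in `cn_clip/clip/model.py` tolerate flash-attn builds that no longer provide `flash_attn.flash_attention`, so the model falls back to the regular attention path instead of failing at import time; another option is installing an older flash-attn 1.x build (e.g. `pip install "flash-attn<2"`), which should still contain that module.

```python
# Hypothetical edit to the top of cn_clip/clip/model.py (around the lines shown in the traceback):
# keep FlashMHA optional instead of assuming flash_attn.flash_attention exists.
import importlib.util

FlashMHA = None
if importlib.util.find_spec('flash_attn'):
    try:
        FlashMHA = importlib.import_module('flash_attn.flash_attention').FlashMHA
    except ModuleNotFoundError:
        # flash-attn is installed, but this submodule is missing in newer releases;
        # leave FlashMHA as None so the non-flash attention path is used.
        FlashMHA = None
```

This only avoids the crash at import time; if FlashAttention is explicitly enabled in the model config, that path would presumably still require a flash-attn build that provides FlashMHA.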
