Description
System Info
transformers version: 4.40.2
Platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Python version: 3.10.4
Huggingface_hub version: 0.26.2
Safetensors version: 0.4.5
Accelerate version: 1.1.1
Accelerate config: not found
PyTorch version (GPU?): 2.0.1+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
Who can help?
No response
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('xxx/xxx-1.1', trust_remote_code=True, token=True)
Expected behavior
config.json:
{
...,
"auto_map": {
"AutoConfig": "configuration_xxx.xxxConfig",
"AutoModelForCausalLM": "modeling_xxx.xxxForPrediction"
},
}
When I use the above config and code to load my custom model via auto_map, an error occurs if the model's name contains a `.`:
ModuleNotFoundError: No module named 'transformers_modules.xxx-1'
It seems the `.` in the name is mistakenly interpreted as a package separator (i.e., as a directory boundary). How can this issue be resolved?
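A minimal sketch of why the dot breaks dynamic loading (the repo id `xxx/xxx-1.1` here stands in for the real name, as in the report above): Python's import machinery treats every `.` in a module path as a package separator, so the cached remote-code module ends up being resolved as package `transformers_modules.xxx-1` with a subpackage `1`, which does not exist. The `snapshot_download` workaround shown in the comments is an assumption on my part, not an official fix.

```python
# Sketch: how Python's import system splits the dynamic module path
# built from a repo name containing a ".".
module_path = "transformers_modules.xxx-1.1.modeling_xxx"

# Every "." is treated as a package boundary, so the lookup becomes
# transformers_modules -> xxx-1 -> 1 -> modeling_xxx, and the import
# fails at "transformers_modules.xxx-1".
parts = module_path.split(".")
print(parts)
# ['transformers_modules', 'xxx-1', '1', 'modeling_xxx']

# Possible workaround (hypothetical, untested against this exact repo):
# download the repo into a local directory whose name contains no ".",
# then load from that local path instead of the hub id:
#
#   from huggingface_hub import snapshot_download
#   from transformers import AutoModelForCausalLM
#
#   local_dir = snapshot_download("xxx/xxx-1.1", local_dir="xxx-1_1")
#   model = AutoModelForCausalLM.from_pretrained(local_dir, trust_remote_code=True)
```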