Add retry hf hub decorator #35213
Conversation
This is a useful feature. I wonder if there is a possibility to use this for monkey patching.
@BenjaminBossan I was also thinking about whether we could instead go with patching.
Also, it looks like this could be applied directly to any function (a test, from_pretrained).
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Right, this could work. Do you know if there is a single method that can be patched that would work for all transformers auto models?
Not a single method though.
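For illustration only, here is a rough sketch of what the monkey-patching idea floated above could look like, assuming one wrapped `PreTrainedModel.from_pretrained` in a retry loop. The retry count, wait time, and the choice to catch `requests` exceptions are placeholders, not the PR's implementation, and as noted this would not cover tokenizers, processors, etc., which have their own `from_pretrained`:

```python
import functools
import time

import requests
from transformers import PreTrainedModel


def _with_retries(fn, max_attempts=5, wait_before_retry=2.0):
    """Wrap `fn` so transient network errors trigger a wait-and-retry."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn(*args, **kwargs)
            except requests.exceptions.RequestException:
                if attempt == max_attempts:
                    raise
                time.sleep(wait_before_retry)

    return wrapper


# Patch the class-level model loader only; tokenizers, processors, etc. each have
# their own `from_pretrained`, hence "not a single method" to patch.
PreTrainedModel.from_pretrained = classmethod(
    _with_retries(PreTrainedModel.from_pretrained.__func__)
)
```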
…ace/transformers into muellerzr-network-retry
Co-authored-by: Lucain <[email protected]>
Args:
    max_attempts (`int`, *optional*, defaults to 5):
        The maximum number of attempts to retry the flaky test.
    wait_before_retry (`float`, *optional*, defaults to 2):
Wondering if it's better to use exponential backoff instead of a fixed waiting time?
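For reference, a minimal sketch of what an exponential backoff could look like, reusing the parameter names from the docstring above (illustrative only, not part of the PR):

```python
def backoff_delays(wait_before_retry: float = 2.0, max_attempts: int = 5, factor: float = 2.0):
    """Delays to sleep before each retry: 2s, 4s, 8s, 16s with these defaults."""
    return [wait_before_retry * factor**attempt for attempt in range(max_attempts - 1)]


print(backoff_delays())  # [2.0, 4.0, 8.0, 16.0]
```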
We can see if fixed gets the job done before overengineering? :)
Also @Wauplin may just put this in huggingface_hub directly so this might (hopefully) just be temporary
maybe at some point yes but not short-term so please go ahead with this PR :) (cc @hanouticelina for viz')
For now going with what's in here (to just get it merged).
BenjaminBossan
left a comment
Thanks for picking this back up. LGTM.
ydshieh
left a comment
Thank you for finalizing this! Hope our life is easier with this 🙏
The test is written in a class that requires torch, but the test itself is super simple and doesn't need torch. We can move it later, no big deal.
Let's merge and see how it goes 🚀
What does this PR do?
This PR adds a `@hub_retry` decorator, which should be used on tests that give us any form of `requests`-like error when a download from the Hub fails for any reason. It'll then wait a beat and try to rerun the test. I currently have it located in the `ModelTesterMixin` class, but as time goes by and we see where else this pops up, we should use it in those mixins/classes as well.
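For context, a minimal sketch of what a decorator along these lines could look like; this is not the PR's implementation, and the choice to catch `requests` exceptions is an assumption based on the description above (the defaults follow the docstring discussed in the review):

```python
import functools
import time

import requests


def hub_retry(max_attempts: int = 5, wait_before_retry: float = 2.0):
    """Re-run a test when a Hub download fails with a transient network error."""

    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return test_func(*args, **kwargs)
                except requests.exceptions.RequestException:
                    # Assumption: only requests-style network errors are retried.
                    if attempt == max_attempts:
                        raise
                    time.sleep(wait_before_retry)

        return wrapper

    return decorator


# Usage on a test method, e.g. inside ModelTesterMixin:
# @hub_retry()
# def test_from_pretrained(self):
#     ...
```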
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh