coomer cannot sleep hard enough maybe needs more advanced sleep set up? #6597
Even --sleep-request 68 wasn't good enough, I still ran into 403 errors... but what if the sleep grew? It could sleep for something like 10/20/40/80/160/320/640 seconds and that would probably work.
Edit: --sleep-request 168 failed, trying 368 now.
Edit 2: coomer just went offline for me, so this issue is either irrelevant or mandatory, depending on whether coomer is now blocking me personally or everyone :)
If, after the 403 crash, I could resume on whatever page of the coomer rip I left off on instead of crashing and restarting (each page being an offset of o = 50 × pages, in case that's not clear from this issue)... Usually I just repeat a rip to resume it, but coomer is so fragile today that backing off for ten minutes and then resuming without crashing out would be ideal :(
Surviving this error without crashing out of the command would be ideal... ideally, at least the first once or twice I hit this error, I'd love it if gallery-dl just waited out my --sleep-request timer and tried again. In other words, there's no need to resume if it simply never crashes out.
Never mind, I'm stupid and broke this functionality in 74d855c
Oh, also: I'm either IP banned or temporarily IP banned from coomer :( That explains some of these issues. Although it sounds like, in some ways, the bug you're describing essentially turned my tests of this issue into a mini DDoS... I'll wait a few days and see if it works for me again before doing any more complaining about coomer :)
@mikf This is not fixed. I think it only makes it last a little longer, but it still eventually fails. It's noticeable on bigger profiles with many pages. Whether it fails seems to be somewhat random.
Yeah, but that's not on gallery-dl; coomer is dying under heavy traffic. With the bugfix on our end we can resume halfway through a rip instead of starting from the beginning, which is why I marked this closed. I created the issue in the first place because there was literally nothing I could do to work around the problem, but now there is. Maybe the code could be improved further, but I don't personally know how.
But yeah, if you get the 403 error on o=2050, you can now resume from that page.
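For example (a hedged sketch, assuming the coomer extractor honors the site's o= offset parameter when it appears in the profile URL; ANYLARGECOOMERRIP is the same placeholder creator name used in the log below), resuming from that page would look something like:

gallery-dl --sleep-request 8 "https://coomer.su/onlyfans/user/ANYLARGECOOMERRIP?o=2050"

which picks up at offset 2050 instead of re-requesting every page from o=0.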
I am getting a standard 403 from coomer even when using --sleep-request 8:
[coomerparty][debug] Sleeping 7.98 seconds (request)
[urllib3.connectionpool][debug] https://coomer.su:443 "GET /api/v1/onlyfans/user/ANYLARGECOOMERRIP?o=2500 HTTP/1.1" 403 1090
[coomerparty][error] HttpError: '403 Forbidden' for 'https://coomer.su/api/v1/onlyfans/user/ANYLARGECOOMERRIP?o=2500'
[coomerparty][debug] Traceback (most recent call last):
  File "gallery_dl\job.pyc", line 151, in run
  File "gallery_dl\extractor\kemonoparty.pyc", line 82, in items
  File "gallery_dl\extractor\kemonoparty.pyc", line 558, in _pagination
  File "gallery_dl\extractor\kemonoparty.pyc", line 551, in _call
  File "gallery_dl\extractor\common.pyc", line 244, in request
gallery_dl.exception.HttpError: '403 Forbidden' for 'https://coomer.su/api/v1/onlyfans/user/ANYLARGECOOMERRIP?o=2500'
Possible solution: sleeps need a way to grow the more often they occur, something like
--sleep-request 8 --sleep-request-growth-acceleration 2
or some such. It would either double the number of seconds to sleep or add 2 seconds each time, whichever method you'd prefer.
This is something rclone does to deal with Google Drive rate limits; that's where I got the idea (a rough sketch of that kind of growing backoff follows below).
The situation where this becomes a serious problem is OnlyFans rips with thousands of posts; tiny coomer rips without that many pages won't run into it.
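This isn't gallery-dl's actual implementation, just a minimal standalone sketch of the growing-sleep idea in Python using the requests library; the function name, parameters, and the reuse of the ANYLARGECOOMERRIP placeholder are all made up for illustration:

import time
import requests

def get_with_growing_sleep(url, base_sleep=8.0, growth=2.0, max_tries=7):
    """GET a URL; on a 403, wait and retry with a doubling delay instead of aborting."""
    sleep = base_sleep
    for attempt in range(1, max_tries + 1):
        response = requests.get(url, timeout=30)
        if response.status_code != 403:
            response.raise_for_status()  # other HTTP errors still raise
            return response
        # Got a 403: back off and try again; delays grow 8, 16, 32, 64, ... seconds
        print(f"403 on attempt {attempt}, sleeping {sleep:.0f}s before retrying")
        time.sleep(sleep)
        sleep *= growth
    raise RuntimeError(f"still 403 for {url} after {max_tries} attempts")

# Hypothetical usage against one API page:
# page = get_with_growing_sleep(
#     "https://coomer.su/api/v1/onlyfans/user/ANYLARGECOOMERRIP?o=2500")

Doubling (growth=2.0) matches the 10/20/40/80/... progression suggested earlier in the thread; an additive variant would simply use sleep += growth instead.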