Error Messages with local Ollama Models #611
Seems to be an issue related to litellm, now solved. I also got a weird error message about SSL and fixed it using this SO answer: https://stackoverflow.com/a/76187415
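For anyone hitting a similar SSL message: a quick way to check which SSL library your Python build is linked against (my assumption is that the error was the common urllib3 v2 / OpenSSL 1.1.1+ mismatch that the linked answer addresses):

```python
# Check which SSL library the running Python interpreter was compiled against.
# urllib3 v2 requires OpenSSL 1.1.1+; older OpenSSL or LibreSSL builds trigger
# a warning/error on import.
import ssl
print(ssl.OPENSSL_VERSION)
```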
Thanks @nerder!
But I'm now using the exact same version you changed in #616.
I am not sure I want to replace the OpenSSL lib on my system though, as I am running this natively and not in Docker.
Okay, so the issue with the log message:
comes from litellm not having the exact models in their cost map here: https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json I now have set:
and I only get the fatal URL message, as the link does not exist.
Not sure, but I guess this comes from the Rust library cached_path (https://crates.io/crates/cached-path), though I'm not sure where that might be used in this project at all...
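If the only remaining problem is the missing cost-map entry, litellm documents a way to register custom model pricing at runtime; shell_gpt doesn't expose this as far as I know, so treat the following as a sketch only, with the field names copied from the format of the linked JSON and the numbers as placeholders:

```python
import litellm

# Register a cost-map entry for a model that is missing from
# model_prices_and_context_window.json so litellm stops warning about it.
# Values are placeholders; a local Ollama model has no per-token API cost.
litellm.register_model({
    "ollama/llama3.1": {
        "max_tokens": 8192,
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
        "litellm_provider": "ollama",
        "mode": "chat",
    }
})
```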
Hi,
first of all, MANY thanks for building this great app for us!
I was able to get a local Ollama installation working with sgpt; commands and everything else are working fine.
However, there are some issues I cannot find in the existing issues here on GitHub and cannot solve or get rid of:
When using a model like DEFAULT_MODEL=ollama/llama3.1:latest or gemma2:2b I get the following errors:
I also tried with ollama/llama3.1 and got the same error messages.
Just to be clear, everything seems to work fine, this is just an annoying message :)
However, I cannot find any reference to Hugging Face or these messages in the source code, so I wanted to share this here.
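In case it helps narrow things down, the call path can be reproduced with litellm alone, bypassing sgpt entirely; a minimal sketch, assuming Ollama is serving on its default endpoint at http://localhost:11434:

```python
import litellm

# Minimal repro: call litellm directly against the local Ollama server
# to see whether the warnings come from litellm itself rather than sgpt.
response = litellm.completion(
    model="ollama/llama3.1",
    messages=[{"role": "user", "content": "Say hello"}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)
```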
Some Examples: