Process finished with exit code -1073741819 (0xC0000005) while trying to infer CodeGemma-2B GGUF #77
Comments
What is your CPU model? I see in the log that the llama.cpp binary was built with AVX2, which almost all CPUs released in the last 12 years should support - but still.
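(A quick way for a Windows reporter to answer this from the same JVM: print the PROCESSOR_IDENTIFIER environment variable, which names the CPU family/model. A minimal sketch; it identifies the chip but does not itself list extensions like AVX2:)

```java
public class CpuInfo {
    public static void main(String[] args) {
        // On Windows, PROCESSOR_IDENTIFIER holds the CPU family/model string,
        // e.g. "Intel64 Family 6 Model 140 ...", which can be used to look up
        // whether the chip supports AVX2.
        System.out.println(System.getenv("PROCESSOR_IDENTIFIER"));
        System.out.println(System.getProperty("os.arch"));
    }
}
```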
Also seeing pretty much the same problem on an i7-11370H.
Can you maybe experiment a bit:
A small nitpick:
I would recommend using version
Some experiments so far
OK, and what about using vanilla llama.cpp?
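(Testing with vanilla llama.cpp means running the same GGUF through the upstream CLI, outside the JVM, to see whether the crash lives in the model/native code or in the JNI bindings. A sketch that drives it from Java via ProcessBuilder; the binary name and model path are hypothetical:)

```java
import java.io.IOException;

public class VanillaLlamaCppTest {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Runs the upstream llama.cpp CLI in a separate process, so a native
        // crash there cannot take the JVM down with it.
        Process p = new ProcessBuilder(
                "./main", "-m", "models/codegemma-2b.gguf", "-p", "Hello", "-n", "16")
                .inheritIO()
                .start();
        System.out.println("llama.cpp exited with code " + p.waitFor());
    }
}
```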
I think this is related to issue #83. I was having the same problem, so I tried simplifying my code to the following (the original snippet did not survive the paste; a stand-in sketch is below), after which I got additional info in my output.
Full output and generated log file attached below:
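(A minimal sketch of the kind of reduction meant here, assuming the java-llama.cpp API from this repository and a hypothetical model path; the commenter's actual snippet was not preserved:)

```java
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class CrashRepro {
    public static void main(String[] args) {
        // Hypothetical path; with this crash, loading the model alone is often
        // enough to hit the native access violation before any output appears.
        ModelParameters params = new ModelParameters()
                .setModelFilePath("models/codegemma-2b.gguf");
        try (LlamaModel model = new LlamaModel(params)) {
            System.out.println("model loaded");
        }
    }
}
```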
The vanilla one works for me, as does using Ollama.
Hello!
I'm trying to get an answer from CodeGemma 2B GGUF, but the JVM crashes shortly after start, without producing any model output (exit code -1073741819 / 0xC0000005 is a Windows access violation in native code).
GGUF file downloaded from HuggingFace.
OS: Windows 10
Code (the class body was truncated in the paste; the main method below is a reconstruction along the lines of the library's README example, with a hypothetical model path):

```java
package org.example;
import de.kherud.llama.InferenceParameters;
import de.kherud.llama.LlamaModel;
import de.kherud.llama.LlamaOutput;
import de.kherud.llama.ModelParameters;
import de.kherud.llama.args.MiroStat;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) {
        ModelParameters modelParams = new ModelParameters()
                .setModelFilePath("models/codegemma-2b.gguf"); // hypothetical path
        InferenceParameters inferParams = new InferenceParameters("Write a hello world program in Java.")
                .setTemperature(0.7f)
                .setMiroStat(MiroStat.V2);
        try (LlamaModel model = new LlamaModel(modelParams)) {
            for (LlamaOutput output : model.generate(inferParams)) {
                System.out.print(output);
            }
        }
    }
}
```
Program output attached:
log.txt
For some reason, no JVM crash dump file (hs_err_pid*.log) was generated.