Insights: ollama/ollama
Overview
67 Pull requests merged by 55 people
- Add Observability section and OpenLIT in README (#7811, merged Nov 24, 2024)
- Add preliminary support for riscv64 (#6627, merged Nov 23, 2024)
- Add ChatGPTBox and RWKV-Runner to community integrations (#4118, merged Nov 23, 2024)
- OpenAI: accept additional headers to fix CORS error (#6910, merged Nov 23, 2024; see the sketch after this list)
- Add PowerShell community tool (#7438, merged Nov 23, 2024)
- runner.go: fix deadlock with many concurrent requests (#7805, merged Nov 23, 2024)
- server: remove out-of-date anonymous access check (#7785, merged Nov 22, 2024)
- tests: fix max queue integration test (#7782, merged Nov 22, 2024)
- logs: explain client aborts better (#7783, merged Nov 22, 2024)
- Be quiet when redirecting output (#7360, merged Nov 22, 2024)
- Add Local Multimodal AI Chat link to README.md (#6931, merged Nov 22, 2024)
- Update google/uuid module (#7310, merged Nov 22, 2024)
- Add ollamarama-matrix to community integrations (#7325, merged Nov 22, 2024)
- Add introduction of the x-cmd/ollama module (#5191, merged Nov 22, 2024)
- Add OrionChat, a web interface for seamless AI conversation (#7084, merged Nov 21, 2024)
- Delete duplicated Reset() code (#7308, merged Nov 21, 2024)
- docs: remove tutorials, add cloud section to community integrations (#7784, merged Nov 21, 2024)
- env.sh: clean up unused RELEASE_IMAGE_REPO (#6855, merged Nov 21, 2024)
- Update README.md with new terminal tool ParLlama (#5623, merged Nov 21, 2024)
- Add web management tool to community integrations (#7126, merged Nov 21, 2024)
- Update README.md: add Terraform AWS Ollama & Open WebUI to Extensions & Plugins (#5633, merged Nov 21, 2024)
- Update README.md (#5587, merged Nov 21, 2024)
- Add Nosia to community integrations (#5381, merged Nov 21, 2024)
- Add Spring AI library reference (#5981, merged Nov 21, 2024)
- Add a new Golang library (#7191, merged Nov 21, 2024)
- Add integration: py-gpt (#6503, merged Nov 21, 2024)
- Add reference to Promptery (Ollama client) to README.md (#7093, merged Nov 21, 2024)
- Update README.md with node-red-contrib-ollama (#4648, merged Nov 21, 2024)
- Add Ollama Grid Search to community integrations in README (#4301, merged Nov 21, 2024)
- Add LLPhant to README.md (#5679, merged Nov 21, 2024)
- Add autogpt integration to list of community integrations (#6459, merged Nov 21, 2024)
- Update README.md (#5575, merged Nov 21, 2024)
- Update README.md: add Ollama-GUI to web & desktop (#5412, merged Nov 21, 2024)
- Update README.md to add Shinkai Desktop (#4877, merged Nov 21, 2024)
- docs: add OpenGPA to README Web & Desktop (#5497, merged Nov 21, 2024)
- Update README.md - Library - Haverscript (#6945, merged Nov 21, 2024)
- Update README.md, terminal app "bb7" (#7064, merged Nov 21, 2024)
- Update README.md, Linux AMD ROCm area (#7213, merged Nov 21, 2024)
- Update README.md (#7221, merged Nov 21, 2024)
- Add Orbiton to the README.md file (#7770, merged Nov 21, 2024)
- fix: typo in wintray messages const (#7705, merged Nov 21, 2024)
- docs: link to AMD guide on multi-GPU guidance (#7744, merged Nov 21, 2024)
- KV cache fixes (#7767, merged Nov 20, 2024)
- Add llm-axe to community libraries in README (#5931, merged Nov 20, 2024)
- Add Swollama links to README.md (#7383, merged Nov 20, 2024)
- feat: add vibe app to README (#7607, merged Nov 20, 2024)
- Update README.md (#7707, merged Nov 20, 2024)
- Fix minor typo in import.md (#7764, merged Nov 20, 2024)
- Add community integration (update README.md) (#7746, merged Nov 20, 2024)
- Update README.md (#7756, merged Nov 20, 2024)
- Improve crash reporting (#7728, merged Nov 20, 2024)
- Expose underlying error on embedding failure (#7743, merged Nov 20, 2024)
- fix(runner): set logits to 0 if false on Batch.Add (#7749, merged Nov 19, 2024)
- server: allow mixed-case model names on push, pull, cp, and create (#7676, merged Nov 19, 2024)
- Better error suppression when getting terminal colours (#7739, merged Nov 19, 2024)
- Update the docs (#7731, merged Nov 19, 2024)
- Update README.md (#7724, merged Nov 19, 2024)
- Notify the user if systemd is not running during install (#6693, merged Nov 18, 2024)
- win: add right-click menu support (#7727, merged Nov 18, 2024)
- Fix index out of range on zero-layer Metal load (#7696, merged Nov 18, 2024)
- readme: improve Community Integrations section (#7718, merged Nov 18, 2024)
- Add Witsy and multi-llm-ts to README (#7713, merged Nov 18, 2024)
- Add Perfect Memory AI to community integrations (#7431, merged Nov 17, 2024)
- Add ollama-haskell library (#7451, merged Nov 17, 2024)
- feat: add VT chat app to README (#7706, merged Nov 17, 2024)
- server: fix warnings in prompt_test.go (#7710, merged Nov 17, 2024)
- docs: add customization section in linux.md (#7709, merged Nov 17, 2024)
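Aside: #6910 above adjusts header handling on Ollama's OpenAI-compatible API, the endpoint browser apps hit when they run into CORS preflights. For orientation, here is a minimal sketch of calling that endpoint with the `openai` Python client, following the pattern Ollama's own docs describe; the local URL, the placeholder API key, and the `llama3.2` model name are illustrative assumptions, not part of the change itself.

```python
# Minimal sketch: calling Ollama's OpenAI-compatible endpoint.
# Assumes `ollama serve` is running on the default port and that
# the llama3.2 model has already been pulled (`ollama pull llama3.2`).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # required by the client library, ignored by Ollama
)

resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```

For the browser CORS case itself, the documented `OLLAMA_ORIGINS` environment variable controls which origins the server accepts.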
4 Pull requests opened by 4 people
- openai: fix follow-on messages having "role": "assistant" (#7722, opened Nov 18, 2024)
- ppc64le: corrected ioctls (#7777, opened Nov 21, 2024)
- Update README.md (#7818, opened Nov 24, 2024)
- Bring ollama `fileType`s into alignment with llama.cpp (#7819, opened Nov 24, 2024)
73 Issues closed by 35 people
- Strange output behavior between Ollama Llama 3.2 11B and lmsys-deployed Llama 3.2 11B (#7809, closed Nov 24, 2024)
- GPU usage is not high, but video memory is full (#7801, closed Nov 23, 2024)
- Ollama hangs randomly and sometimes responds with G's (#7766, closed Nov 23, 2024)
- Not reading image files with vision models (#7804, closed Nov 23, 2024)
- Context length not being updated (#7806, closed Nov 23, 2024)
- Error: Head "https://localhost:11434/": http: server gave HTTP response to HTTPS client (#7708, closed Nov 23, 2024)
- AIDC-AI/Marco-o1 (#7808, closed Nov 23, 2024)
- llama3.2-vision:90b unquantized? (#7794, closed Nov 23, 2024)
- Can't find error log (#7786, closed Nov 23, 2024)
- What splitter is used for documents? (#7797, closed Nov 23, 2024)
- langchain_ollama tool_calls is None (#7799, closed Nov 23, 2024)
- Models get stuck in stopping state (#7779, closed Nov 23, 2024)
- Error: POST predict: Post "http://127.0.0.1:35943/completion": EOF (#7733, closed Nov 22, 2024)
- Outputting the response leaves a bunch of control characters (#6120, closed Nov 22, 2024)
- Performance regression in Ollama 0.4.0 compared to 0.3.14 (#7534, closed Nov 22, 2024)
- Support installations in non-systemd distros (#7332, closed Nov 21, 2024)
- Ollama stuck after a few runs (#1863, closed Nov 21, 2024)
- List available models (#2022, closed Nov 21, 2024)
- API for models on `ollama.com` (#1070, closed Nov 21, 2024)
- List of all available models (#7751, closed Nov 21, 2024)
- Add "loaded" status in model API (#7780, closed Nov 21, 2024)
- Streaming support for tools (#7776, closed Nov 21, 2024)
- Curl error: trying a curl request in the CLI, but the response is HTML (#7772, closed Nov 21, 2024)
- Update model Intel/neural-chat (#2662, closed Nov 21, 2024)
- Support for Whisper-family models (#7233, closed Nov 21, 2024)
- Ollama pull model without internet when run with Docker (#4847, closed Nov 21, 2024)
- Support Pixtral Large (#7747, closed Nov 21, 2024)
- Tool call stream (#7774, closed Nov 21, 2024)
- What happened with the recent update? (#7762, closed Nov 20, 2024)
- Model not loaded on all GPUs for load balancing (#7768, closed Nov 20, 2024)
- Error reading LLM response: An existing connection was forcibly closed by the remote host (#6937, closed Nov 20, 2024)
- Not using GPU after timeout unload of models with Docker image (#7765, closed Nov 20, 2024)
- The Way to the light (#7759, closed Nov 20, 2024)
- Performance regression for 0.4.* caused by number of input tokens (#7717, closed Nov 20, 2024)
- num_ctx does not increase context length above 2048 (#7741, closed Nov 20, 2024; see the sketch after this list)
- How to remove the quantization at model startup (#7753, closed Nov 20, 2024)
- Open WebUI: server connection error (#6384, closed Nov 20, 2024)
- Why is the generated content missing when reader 1.5b processes HTML? (#7732, closed Nov 20, 2024)
- armv7 support (#1926, closed Nov 20, 2024)
- Performance is decreasing (#7740, closed Nov 19, 2024)
- Ollama 0.4 stops answering on granite3-dense but works with 0.3 (#7656, closed Nov 19, 2024)
- GPU VRAM usage didn't recover within timeout on llama3.2-vision:90b (#7745, closed Nov 19, 2024)
- Installation script breaks in devcontainer with terminal color issue (#7737, closed Nov 19, 2024)
- Multi-GPU returning garbage (#7575, closed Nov 19, 2024)
- Getting "no compatible GPUs were discovered" even though I have a GPU (#7692, closed Nov 19, 2024)
- Ollama update fails to restart systemd service (#7562, closed Nov 18, 2024)
- Ollama 0.4 not using VRAM on AMD RX 7900 XTX (#7715, closed Nov 18, 2024)
- Install script not reporting issue with systemd (#6636, closed Nov 18, 2024)
- On Windows 11 Pro, it does work to right-click "restart to update" (#7704, closed Nov 18, 2024)
- Why can't the installation directory be modified? (#7720, closed Nov 18, 2024)
- AMD graphics card encounters a memory usage exception while running on Windows 11 (#7721, closed Nov 18, 2024)
- Memory leaks after each prompt on 6.11 kernel with NVIDIA GPU (#7403, closed Nov 18, 2024)
- Unable to download model llama3.2-vision:11b; request to update Ollama (#7651, closed Nov 18, 2024)
- Proxy does not work for Ollama, but does work for curl (#7726, closed Nov 18, 2024)
- Support for 1-bit LLMs (#7312, closed Nov 18, 2024)
- Ollama fails to run with ROCm 6.2.2 in Arch packaging (#7564, closed Nov 18, 2024)
- OpenCoder's template doesn't make sense for an instruct model (#7682, closed Nov 18, 2024)
- Wrong license for the `minicpm-v` model (#7714, closed Nov 18, 2024)
- CPU overheating (#7702, closed Nov 18, 2024)
- Add Nexusflow/Athene-V2-Chat and Nexusflow/Athene-V2-Agent (#7678, closed Nov 18, 2024)
- Codestral doesn't output a correct response (#4713, closed Nov 17, 2024)
- codegeex4 (#5595, closed Nov 17, 2024)
- Is the llava license correct (possibly should be Llama 2, not Apache)? (#4561, closed Nov 17, 2024)
- Splitting layers on macOS gives incorrect output (#3695, closed Nov 17, 2024)
- Upgrading removes all models (#5591, closed Nov 17, 2024)
- My Ollama stopped transcribing videos (#5649, closed Nov 17, 2024)
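Aside: several of the closed issues above (e.g. #7741) come down to the same default: Ollama uses a 2048-token context window unless `num_ctx` is raised, either with `PARAMETER num_ctx ...` in a Modelfile or per request via the `options` object. A minimal sketch of the per-request form, standard library only; the local URL and the `llama3.2` model name are illustrative assumptions.

```python
# Minimal sketch: raising the context window per request via options.num_ctx.
# Assumes a local Ollama server and an already-pulled llama3.2 model.
import json
import urllib.request

payload = {
    "model": "llama3.2",
    "prompt": "Summarize the following document ...",
    "stream": False,  # one JSON object instead of a JSONL stream
    "options": {"num_ctx": 8192},  # default is 2048 when unset
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    print(json.loads(r.read())["response"])
```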
43 Issues opened by 40 people
- Instant closure when using shell input with piped output (#7820, opened Nov 24, 2024)
- Missing ROCm library files in ollama-linux-amd64-rocm.tgz (#7817, opened Nov 24, 2024)
- I import an IQ_4XS model but get an IQ1_M (#7816, opened Nov 24, 2024)
- Any way to fine-tune? (#7815, opened Nov 24, 2024)
- Flag to prevent infinite generation (#7814, opened Nov 24, 2024)
- Not utilizing GPU (#7813, opened Nov 24, 2024)
- Fetching a list of available models for download? (#7812, opened Nov 24, 2024)
- Could anyone help me? Something is not working (using a special GPU) (#7810, opened Nov 23, 2024)
- Newer Ollama version chats more slowly (#7807, opened Nov 23, 2024)
- Problem with ollama serve (#7803, opened Nov 22, 2024)
- Minimum viable GGUF crashes server on run (#7802, opened Nov 22, 2024)
- Losing user agent after HTTP redirect while pulling models (#7800, opened Nov 22, 2024)
- Is this a bug? (2 GB model -> up to 20 GB pagefile) (#7798, opened Nov 22, 2024)
- Prompt truncated (#7796, opened Nov 22, 2024)
- Empty output from chat endpoint / non-empty output from non-chat endpoint (#7795, opened Nov 22, 2024)
- Vision LLM GGUF recommendation: is there any vision LLM with great performance in GGUF format? (#7793, opened Nov 22, 2024)
- Mistral Large instruct template (#7792, opened Nov 22, 2024)
- Don't try to parse the images property for non-image models (#7791, opened Nov 22, 2024)
- How to prevent Ollama requests from changing the running model? (#7789, opened Nov 22, 2024)
- Ollama 0.4.3 ignores HTTPS_PROXY (#7788, opened Nov 22, 2024)
- How to update Ollama desktop on Windows? (#7787, opened Nov 22, 2024)
- Llama 3.2 Safetensors adapter not supported? (#7781, opened Nov 21, 2024)
- tool_choice parameter (#7778, opened Nov 21, 2024)
- Request: Nexa AI Omnivision (#7769, opened Nov 20, 2024)
- Ollama on AMD CPU iGPU Windows hack (#7763, opened Nov 20, 2024)
- High inference time and limited GPU utilization with Ollama Docker (#7761, opened Nov 20, 2024)
- qwen2.5-coder isn't utilizing the GPU (#7760, opened Nov 20, 2024)
- OLLAMA_MAX_QUEUE does not limit requests to the same model (#7758, opened Nov 20, 2024)
- Memory usage higher than LM Studio for a similar model (#7757, opened Nov 20, 2024)
- Proper way to train a model on my data and load it into Ollama? (#7755, opened Nov 20, 2024)
- 300+ MB of RAM while idle (#7754, opened Nov 20, 2024)
- Support for LLaVA-o1 (#7752, opened Nov 20, 2024)
- llama3.2-vision model quantization request (#7742, opened Nov 19, 2024)
- Allow passing file context for FIM tasks on /api/generate (#7738, opened Nov 19, 2024)
- SSL support (for Brave Leo AI & others) (#7736, opened Nov 19, 2024)
- Docker build error (#7735, opened Nov 19, 2024)
- Can a model be started using its ID? (#7730, opened Nov 19, 2024)
- Radeon GPU not used (#7729, opened Nov 19, 2024)
- Can't use GPU on Ubuntu 22.04 without Docker: permission problems (#7723, opened Nov 18, 2024)
- Feature suggestions and development compilation environment issues (#7716, opened Nov 18, 2024)
- Large host RAM allocation when using full GPU offloading (#7711, opened Nov 17, 2024)
93 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- feat: Introduce K/V Context Quantisation (vRAM improvements) (#6279, commented on Nov 24, 2024; 48 new comments)
- Expose GPU and basic system info (#7262, commented on Nov 19, 2024; 5 new comments)
- Add Vulkan support to ollama (#5059, commented on Nov 21, 2024; 3 new comments)
- [Ascend] add Ascend NPU support (#5872, commented on Nov 18, 2024; 3 new comments)
- cmd: print location of model after pushing (#7695, commented on Nov 23, 2024; 2 new comments)
- [docs] [modelfile.md] num_predict: incorrect default value (#7693, commented on Nov 18, 2024; 1 new comment)
- Lowercase hostname for CORS (#5851, commented on Nov 23, 2024; 1 new comment)
- feat: function calling on stream (#6452, commented on Nov 21, 2024; 1 new comment)
- Vendor bump llama.cpp (#7670, commented on Nov 22, 2024; 0 new comments)
- Don't automatically start on startup / have an option to disable this (#162, commented on Nov 24, 2024; 0 new comments)
- Support GPU runners on CPUs without AVX (#2187, commented on Nov 24, 2024; 0 new comments)
- Error: POST predict: Post "http://127.0.0.1:42623/completion": EOF (#7640, commented on Nov 24, 2024; 0 new comments)
- Apple Silicon Neural Engine: Core ML model package format (#3898, commented on Nov 24, 2024; 0 new comments)
- Potential problems with `llm/ext_server/server.cpp` not accepting the `--ubatch-size` option (#3554, commented on Nov 23, 2024; 0 new comments)
- Can we have the newest 1-bit model? (#2821, commented on Nov 23, 2024; 0 new comments)
- Does Ollama support accelerated running on NPUs? (#3004, commented on Nov 23, 2024; 0 new comments)
- Ollama is not using 100% of RTX 4000 VRAM (18 of 20 GB) (#3078, commented on Nov 22, 2024; 0 new comments)
- MLX backend (#1730, commented on Nov 22, 2024; 0 new comments)
- ollama.service cannot create the folder defined by OLLAMA_MODELS, or does not run when the folder is created manually (#2701, commented on Nov 22, 2024; 0 new comments)
- Pull private Hugging Face model (#7240, commented on Nov 22, 2024; 0 new comments)
- Support for Ascend NPU hardware (#5315, commented on Nov 22, 2024; 0 new comments)
- Integration with MLflow (#5016, commented on Nov 22, 2024; 0 new comments)
- Add Qwen2-VL (#6564, commented on Nov 22, 2024; 0 new comments)
- Support for Molmo by Allen AI (#6958, commented on Nov 22, 2024; 0 new comments)
- Add tab-enabled autocomplete for local model parameters in the Ollama CLI (#7239, commented on Nov 22, 2024; 0 new comments)
- Add support for older AMD GPUs gfx803, gfx802, gfx805 (e.g. Radeon RX 580, FirePro W7100) (#2453, commented on Nov 22, 2024; 0 new comments)
- The parameter 'keep_alive' is invalid when on CPU (100%) (#7645, commented on Nov 21, 2024; 0 new comments)
- Mac errors when running (#7495, commented on Nov 21, 2024; 0 new comments)
- Allow compiling for older GPUs still on CUDA 11.3 (#7615, commented on Nov 22, 2024; 0 new comments)
- feat: Support Moore Threads GPU (#7554, commented on Nov 21, 2024; 0 new comments)
- imageproc mllama refactor (#7537, commented on Nov 19, 2024; 0 new comments)
- build: Make target improvements (#7499, commented on Nov 23, 2024; 0 new comments)
- Boost embed endpoint (#7424, commented on Nov 21, 2024; 0 new comments)
- runner.go: use stable llama.cpp sampling interface (#7368, commented on Nov 21, 2024; 0 new comments)
- FEAT: add rerank support (#7219, commented on Nov 20, 2024; 0 new comments)
- Update README.md (#7216, commented on Nov 21, 2024; 0 new comments)
- Fix some typos in documentation, code, code comments, etc. (#7021, commented on Nov 23, 2024; 0 new comments)
- Add support for CC v5 and v6+ multi-GPU CUDA setups (#6983, commented on Nov 23, 2024; 0 new comments)
- Bump ROCm on Linux to 6.2 (#6969, commented on Nov 22, 2024; 0 new comments)
- openai: support include_usage stream option to return final usage chunk (#6784, commented on Nov 19, 2024; 0 new comments)
- AMD integrated graphics on Linux kernel 6.9.9+, GTT memory, loading freeze fix (#6282, commented on Nov 19, 2024; 0 new comments)
- cmd/server: utilize OS copy to transfer blobs if the server is local (#5887, commented on Nov 21, 2024; 0 new comments)
- Make llama.cpp's cache_prompt parameter configurable (#5760, commented on Nov 20, 2024; 0 new comments)
- Add API integration tests (#5678, commented on Nov 23, 2024; 0 new comments)
- Cobra shell completions (#4690, commented on Nov 19, 2024; 0 new comments)
- Exposing grammar as a request parameter in completion/chat with Go-side grammar validation (#4525, commented on Nov 21, 2024; 0 new comments)
- examples: Update langchain-python-simple (#3591, commented on Nov 21, 2024; 0 new comments)
- When I run Ollama using an AMD 6750 GRE 12G I get an error: gfx1031 unsupported by official ROCm on Windows (#7694, commented on Nov 19, 2024; 0 new comments)
- Ability to configure embedding dimension size (#651, commented on Nov 19, 2024; 0 new comments)
- CPU-based Ollama doesn't run in an LXC (host kernel 6.8.4-3) (#5532, commented on Nov 19, 2024; 0 new comments)
- GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed (#7590, commented on Nov 19, 2024; 0 new comments)
- phi-3 small and phi-3 vision missing? (#4646, commented on Nov 19, 2024; 0 new comments)
- Only CPU is used after rebooting (#7669, commented on Nov 19, 2024; 0 new comments)
- Support Qwen VL (#2874, commented on Nov 19, 2024; 0 new comments)
- Ollama product stance on grammar feature / outstanding PRs (#6237, commented on Nov 18, 2024; 0 new comments)
- Add Vulkan runner (#2033, commented on Nov 18, 2024; 0 new comments)
- Linux Ollama 0.4.0 custom compile for AMD ROCm fails: missing ggml_rocm in Go compile (#7565, commented on Nov 18, 2024; 0 new comments)
- Basic AI test result inconsistent compared to llama.cpp (#7232, commented on Nov 18, 2024; 0 new comments)
- Response returns 'null' for 'finish_reason' (#7547, commented on Nov 18, 2024; 0 new comments)
- KV cache quantization (#5091, commented on Nov 18, 2024; 0 new comments)
- Support Steam Deck Docker amdgpu - gfx1033 (#3243, commented on Nov 18, 2024; 0 new comments)
- Please update NuExtract to v1.5 (#7397, commented on Nov 18, 2024; 0 new comments)
- Support for BGE-Multilingual-Gemma2 (#7449, commented on Nov 18, 2024; 0 new comments)
- Role field should not be repeated in streamed response chunks (#7626, commented on Nov 18, 2024; 0 new comments)
- `Error: file does not exist` but it exists (#5869, commented on Nov 18, 2024; 0 new comments)
- CUDA error: out of memory - Llama 3.2 3B on laptop with 13 GB RAM (#7673, commented on Nov 18, 2024; 0 new comments)
- Keeping the community in the loop (#2231, commented on Nov 17, 2024; 0 new comments)
- Dynamically determine context window at runtime (#2547, commented on Nov 17, 2024; 0 new comments)
- Detect missing GPU runners and don't report incorrect GPU info/logs (#7597, commented on Nov 17, 2024; 0 new comments)
- Unable to load images from network fileshares on Windows (#7553, commented on Nov 17, 2024; 0 new comments)
- Error: pull model manifest: Get (#4976, commented on Nov 21, 2024; 0 new comments)
- Support Radeon RX 5700 XT (gfx1010) (#2503, commented on Nov 21, 2024; 0 new comments)
- Streaming for tool calls is unsupported (#5796, commented on Nov 21, 2024; 0 new comments)
- Support Mistral's new visual model: Pixtral-12b-240910 (#6748, commented on Nov 21, 2024; 0 new comments)
- Teflon (a new part of Mesa on Linux) NPU delegate support (#3498, commented on Nov 21, 2024; 0 new comments)
- Support `ppc64le` architecture (#796, commented on Nov 21, 2024; 0 new comments)
- Ollama doesn't seem to use my GPU after update (#7622, commented on Nov 21, 2024; 0 new comments)
- Beam search (best of) for completion API (#1344, commented on Nov 21, 2024; 0 new comments)
- The fine-tuned codegemma model exhibits abnormal performance (#7679, commented on Nov 21, 2024; 0 new comments)
- Support setting `num_ctx` in the OpenAI API via an extra query parameter (#7063, commented on Nov 21, 2024; 0 new comments)
- How to set parameters so the Ollama model outputs more detailed and comprehensive answers? (#4352, commented on Nov 21, 2024; 0 new comments)
- Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs (#7509, commented on Nov 21, 2024; 0 new comments)
- Unreliable free memory resulting in models not running (#6918, commented on Nov 21, 2024; 0 new comments)
- "server stop" and "server status" commands (#3314, commented on Nov 20, 2024; 0 new comments)
- Clarify JSONL as the returned format for streaming JSON objects (#7703, commented on Nov 20, 2024; 0 new comments; see the sketch after this list)
- OLLAMA_MODELS path not working any longer (#7090, commented on Nov 20, 2024; 0 new comments)
- 503 error after using api/generate for some time (#7573, commented on Nov 20, 2024; 0 new comments)
- Ollama should prevent sleep when working (#4072, commented on Nov 20, 2024; 0 new comments)
- Add support for Intel Arc GPUs (#1590, commented on Nov 20, 2024; 0 new comments)
- Getting "unsupported architecture" error when importing Llama-vision (#7581, commented on Nov 20, 2024; 0 new comments)
- Moondream2 needs an update (#6635, commented on Nov 19, 2024; 0 new comments)
- Validation of keys and subkeys in Ollama API JSON objects (#7653, commented on Nov 19, 2024; 0 new comments)
- Support AMD GPUs on Intel Macs (#1016, commented on Nov 19, 2024; 0 new comments)
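Aside: a recurring point of confusion in the threads above (e.g. #7703) is the shape of streamed responses: Ollama's streaming endpoints return newline-delimited JSON (JSONL), one object per line, with the final object carrying "done": true. A minimal consumer sketch, standard library only; the URL and model name are illustrative assumptions.

```python
# Minimal sketch: consuming the JSONL stream from /api/generate.
# Each line is a standalone JSON object; the final one has "done": true.
import json
import urllib.request

payload = {"model": "llama3.2", "prompt": "Count to five."}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    for line in r:  # one JSON object per line (JSONL)
        if not line.strip():
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            print()
            break
```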