Post: Rumor has it this is currently the best model for 24 GB VRAM local usage: DrNicefellow/Qwen-QwQ-32B-Preview-4.25bpw-exl2
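As a rough sanity check on the 24 GB VRAM claim, a back-of-the-envelope estimate of the quantized weight footprint (the ~32B parameter count is an approximation; KV cache and activations need extra memory on top of this):

```python
# Estimate the weight-only VRAM footprint of a 32B model
# quantized to 4.25 bits per weight (the 4.25bpw exl2 quant).
params = 32e9                      # ~32 billion parameters (approximate)
bpw = 4.25                         # bits per weight
weight_bytes = params * bpw / 8    # bits -> bytes
weight_gib = weight_bytes / 2**30  # bytes -> GiB

print(f"{weight_gib:.1f} GiB")     # roughly 15.8 GiB for the weights alone
```

That leaves several GiB of a 24 GB card free for the KV cache and runtime overhead, which is consistent with the quant targeting 24 GB GPUs.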
Post: New European LLM: openGPT-X/Teuken-7B-instruct-research-v0.4
Performance LLMs - Base Models
Qwen/Qwen1.5-0.5B • Text Generation • Updated Apr 5 • 308k • 144
stabilityai/stablelm-2-1_6b • Text Generation • Updated Jul 10 • 18k • 186
openbmb/MiniCPM-2B-128k • Text Generation • Updated May 24 • 696 • 41
stabilityai/stablelm-3b-4e1t • Text Generation • Updated Mar 7 • 9.94k • 309
Performance LLMs - Fine tuned
KnutJaegersberg/Qwen2-Deita-500m • Text Generation • Updated Jun 6 • 6 • 4
KnutJaegersberg/Deita-2b • Text Generation • Updated Mar 4 • 74 • 2
microsoft/Phi-3-mini-128k-instruct • Text Generation • Updated Aug 20 • 1.06M • 1.61k
NousResearch/Hermes-2-Pro-Mistral-7B • Text Generation • Updated Sep 8 • 15.6k • 487
KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q4_K_M-GGUF Text Generation • Updated 4 days ago • 100 • 1
KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF Text Generation • Updated 4 days ago • 90 • 1
KnutJaegersberg/Teuken-7B-instruct-research-v0.4-8.0bpw-exl2 Text Generation • Updated 4 days ago • 20
KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-8.0bpw-exl2 Text Generation • Updated 4 days ago • 11
KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF Text Generation • Updated 4 days ago • 30
KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format Preview • Updated Sep 4, 2023 • 39 • 3