UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection.
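A minimal sketch of the black-box, consistency-based scoring idea behind UQ packages like UQLM, not its actual API: sample the model several times and treat low agreement as a hallucination-risk signal. The `sample_llm` callable is a hypothetical stand-in for any function that returns one sampled response per call.

```python
# Illustrative sketch of consistency-based uncertainty scoring. `sample_llm`
# is a hypothetical stand-in for any function returning one sampled response.
from collections import Counter
from typing import Callable, List

def consistency_score(prompt: str,
                      sample_llm: Callable[[str], str],
                      num_samples: int = 5) -> float:
    """Sample several responses and score their agreement in [0, 1].

    Low agreement across samples is a common proxy for high model
    uncertainty, and hence a higher risk of hallucination.
    """
    responses: List[str] = [sample_llm(prompt) for _ in range(num_samples)]
    # Crude agreement proxy: fraction of samples matching the modal answer
    # after normalization. Real tools use NLI or embedding similarity.
    normalized = [r.strip().lower() for r in responses]
    _, count = Counter(normalized).most_common(1)[0]
    return count / num_samples

# Usage: flag a response as suspect when agreement falls below a threshold.
# if consistency_score(prompt, my_sampler) < 0.6:
#     print("Low consistency: treat the answer as potentially hallucinated.")
```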
[NeurIPS 2025] SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
Detecting RAG hallucinations with LRP (Layer-wise Relevance Propagation).
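For readers unfamiliar with LRP, here is a toy sketch of its epsilon rule on a tiny two-layer ReLU network; the repository applies the same relevance-propagation idea to a real model over RAG inputs. All weights and inputs below are arbitrary illustrative values.

```python
# Minimal sketch of the LRP epsilon rule on a toy ReLU network.
import numpy as np

def lrp_epsilon(W, a, R_out, eps=1e-6):
    """Propagate relevance R_out back through one linear layer z = W @ a."""
    z = W @ a
    z = z + eps * np.sign(z)           # stabilizer avoids division by zero
    s = R_out / z                      # distribute relevance over outputs
    return a * (W.T @ s)               # credit each input by its contribution

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

h = np.maximum(0.0, W1 @ x)            # forward pass with ReLU
y = W2 @ h
R = np.zeros_like(y)
R[np.argmax(y)] = y.max()              # relevance starts at the top logit

R_h = lrp_epsilon(W2, h, R)            # backpropagate relevance layer by layer
R_x = lrp_epsilon(W1, x, R_h)
print("input relevances:", R_x)        # high-relevance inputs "explain" the output
```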
Build your own open-source REST API endpoint to detect hallucinations in LLM-generated responses.
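A hypothetical sketch of what such a self-hosted endpoint could look like, using FastAPI; the route name and the scoring heuristic are placeholders, and a real service would call a trained detector or a UQ scorer instead.

```python
# Hypothetical hallucination-check endpoint. The scoring function is a
# placeholder; swap in a real detector (NLI model, UQ scorer, etc.).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CheckRequest(BaseModel):
    prompt: str
    response: str

@app.post("/v1/hallucination-check")
def hallucination_check(req: CheckRequest) -> dict:
    # Placeholder heuristic: long answers to short prompts score higher (worse).
    score = min(1.0, len(req.response) / (10.0 * max(len(req.prompt), 1)))
    return {"hallucination_score": score, "flagged": score > 0.5}

# Run with: uvicorn app:app --reload
# curl -X POST localhost:8000/v1/hallucination-check \
#   -H 'Content-Type: application/json' \
#   -d '{"prompt": "Who wrote Hamlet?", "response": "Shakespeare."}'
```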
Splinfer is an inference layer built on llama.cpp that surfaces uncertainty during generation and estimates hallucination probability.
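One common generation-time uncertainty signal that an inference layer of this kind could expose is per-token entropy computed from the model's logits; the sketch below shows the calculation on hypothetical logit vectors and is not Splinfer's actual interface.

```python
# Per-token softmax entropy as an uncertainty signal during generation.
# `logits_per_step` is a hypothetical list of logit vectors, one per token.
import numpy as np

def token_entropies(logits_per_step):
    """Return the softmax entropy (in nats) at each generation step."""
    entropies = []
    for logits in logits_per_step:
        z = logits - np.max(logits)          # stabilize the softmax
        p = np.exp(z) / np.exp(z).sum()
        entropies.append(float(-(p * np.log(p + 1e-12)).sum()))
    return entropies

# High mean or peak entropy marks steps where the model was unsure,
# which correlates with a higher chance of hallucinated content.
steps = [np.random.default_rng(i).normal(size=32) for i in range(4)]
print(token_entropies(steps))
```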
This repository contains the codebase for a proof of concept (PoC) of LLM package hallucination and the associated vulnerabilities.
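The core check behind package-hallucination PoCs can be sketched in a few lines: verify that a package name an LLM suggested actually exists before installing it. This example uses PyPI's public JSON API (a GET on `https://pypi.org/pypi/<name>/json` returns 200 for registered projects) and assumes network access.

```python
# Verify that LLM-suggested package names are real PyPI projects.
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for suggested in ["requests", "definitely-not-a-real-package-xyz"]:
    if not package_exists_on_pypi(suggested):
        # A nonexistent name is a hallucination, and an attacker who
        # registers it later can hijack installs (dependency confusion).
        print(f"WARNING: {suggested!r} does not exist on PyPI")
```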